Re: [PATCH 1/3] Make /dev/urandom scalable

From: Austin S Hemmelgarn
Date: Tue Sep 29 2015 - 07:57:45 EST


On 2015-09-25 15:07, Austin S Hemmelgarn wrote:
On 2015-09-25 07:41, Austin S Hemmelgarn wrote:
On 2015-09-24 16:14, Theodore Ts'o wrote:
On Thu, Sep 24, 2015 at 03:11:23PM -0400, Austin S Hemmelgarn wrote:
That is a startling result. Please say what architecture, kernel
version, dieharder version, and command-line arguments you are using to
get 10% WEAK or FAILED assessments from dieharder on /dev/urandom.

I do not remember the exact dieharder version or command-line arguments
(this was almost a decade ago), other than that I compiled it from
source myself. I do remember it was a 32-bit x86 processor (as that was
sadly all I had to run Linux on at the time) and an early 2.6 series
kernel (which, if I remember correctly, was already EOL by the time I
was using it).

It might have been nice if you had said this from the beginning
instead of making an unqualified statement with the assumption that it
was applicable to kernels likely to be used today in non-obsolete
systems. Otherwise it risks generating a click-bait article on
Phoronix that would get people really worried for no good reason...
I sincerely apologize for this; I should have been more specific right
from the beginning (I need to get better about that when talking to
people; I'm so used to dealing with friends who couldn't even tell you
the difference between RAM and a hard drive, think a bus is only
something you use for transportation, and get confused when I try to
properly explain even relatively simple CS and statistics concepts).

There was a bug a long, long time ago where we weren't doing
sufficient locking, and if two processes raced reading from
/dev/urandom at the same time, it was possible for both processes to
get the same value read out of /dev/urandom. This was fixed a long
time ago, though, and in fact the scalability problem which Andi is
trying to fix was caused by that extra locking that was added. :-)
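
(As an aside, a purely illustrative userspace sketch of that kind of
race follows; it assumes nothing about the actual kernel code, and the
names, the toy xorshift generator, and the pthread setup are all made
up for illustration. With the mutex held, readers are serialized;
remove it and two racing readers can both load the same state and
return identical values.)

    /*
     * Illustrative sketch only -- NOT the actual kernel random driver.
     * Two threads pull from one shared generator state; without
     * pool_lock, both can load the same pool_state before either
     * writes it back, and so return duplicate "random" values.
     */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t pool_state = 0x0123456789abcdefULL;
    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

    static uint64_t read_pool(void)
    {
        uint64_t x;

        pthread_mutex_lock(&pool_lock);   /* the "extra locking" */
        x = pool_state;
        x ^= x << 13;                     /* toy xorshift64 step */
        x ^= x >> 7;
        x ^= x << 17;
        pool_state = x;
        pthread_mutex_unlock(&pool_lock);
        return x;
    }

    static void *reader(void *unused)
    {
        (void)unused;
        printf("%016llx\n", (unsigned long long)read_pool());
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;

        pthread_create(&a, NULL, reader, NULL);
        pthread_create(&b, NULL, reader, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }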

It's possible that is what you saw. I don't know, since there was no
reproduction information to back up your rather startling claim.
I don't think that's what I hit; I'm pretty sure I had serialized the
dieharder runs.

If you can reproduce consistent Dieharder failures, please do let us
know with detailed reproduction instructions.
Will do.
OK, I've just started several runs in parallel, one per generator,
using the following command line:
dieharder -a -m 32 -k 1 -Y 1 -g XXX
with one each for:
/dev/urandom (502)
AES_OFB (205)
glibc random() (039)
mt19937 (013)
The above command line runs all dieharder tests with 12800 psamples,
uses a higher-than-default precision, and re-runs any test that
returns WEAK until it gets a PASS or FAIL. Even on the relatively fast
(at least, fast for a desktop) system I'm running them on, I expect
the runs will take quite some time to finish (and regardless, I'm
probably not going to get back to this until Monday).
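
For clarity, that works out to one invocation per generator, with the
generator numbers above substituted in:

    dieharder -a -m 32 -k 1 -Y 1 -g 502   # /dev/urandom
    dieharder -a -m 32 -k 1 -Y 1 -g 205   # AES_OFB
    dieharder -a -m 32 -k 1 -Y 1 -g 039   # glibc random()
    dieharder -a -m 32 -k 1 -Y 1 -g 013   # mt19937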

Interestingly, based on what dieharder is already reporting about
performance, /dev/urandom is slower than AES_OFB (at least on this
particular system; I'm happy to provide hardware specs if someone
wants them).

Apologies for not replying yesterday like I said I would.

I actually didn't get a chance to run the tests to completion, as the
wifi card in the system I was running them on lost its mind about 55
hours in and I had to cold reboot the system to reset it. I would give
the results here, except that I have a feeling people probably don't
want 110 KB of data in the e-mail body, and Thunderbird is for some
reason choking on trying to attach files. In general, the results were
pretty typical of a good PRNG, performance differences notwithstanding.
In other words, don't use /dev/urandom except for seeding other PRNGs,
but because of the speed, not the quality.
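
(To make that last point concrete, here is a minimal, hypothetical
sketch of the "seed another PRNG" usage, assuming a POSIX system; the
xorshift64 generator is just a stand-in for whatever fast userspace
PRNG is actually in use, and only the seed comes from /dev/urandom.)

    /*
     * Hypothetical sketch of the "seed another PRNG" pattern on a
     * POSIX system: pull a small seed from /dev/urandom once, then let
     * a fast (non-cryptographic) userspace generator produce the bulk
     * of the data.
     */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t seed = 0;
        FILE *f = fopen("/dev/urandom", "rb");

        if (!f || fread(&seed, sizeof(seed), 1, f) != 1) {
            perror("seeding from /dev/urandom");
            return 1;
        }
        fclose(f);
        if (seed == 0)          /* xorshift64 must not start at zero */
            seed = 1;

        for (int i = 0; i < 4; i++) {
            seed ^= seed << 13; /* xorshift64: fast, decent statistics, */
            seed ^= seed >> 7;  /* but NOT cryptographically secure     */
            seed ^= seed << 17;
            printf("%016llx\n", (unsigned long long)seed);
        }
        return 0;
    }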
