Re: [PATCH 1/3] Make /dev/urandom scalable

From: Austin S Hemmelgarn
Date: Thu Sep 24 2015 - 12:01:13 EST


On 2015-09-24 09:12, Theodore Ts'o wrote:
> On Thu, Sep 24, 2015 at 07:37:39AM -0400, Austin S Hemmelgarn wrote:
>> Using /dev/urandom directly, yes, that doesn't make sense, because it
>> consistently returns non-uniformly random numbers when used to generate
>> larger amounts of entropy than the blocking pool can source.
>
> Why do you think this is the case?  Reproduction, please?
>
> - Ted
Aside from the literature scattered across the web, there's the fact that it fails dieharder tests far more often than a high-quality RNG should. (Even a good generator should fail from time to time; one that never does is inherently flawed for other reasons. But I've had cases where, over thousands of dieharder runs, it failed almost 10% of the time, while something like mt19937 failed only about 1-2% of the time in otherwise identical tests.) I will admit that it is significantly better than any libc implementation of rand() that I've seen, and better than many other PRNGs (notably, it is significantly more random than the FIPS 140 DRBGs), but it usually does not do as well as OpenBSD's /dev/arandom (which, from what I've seen, is also considerably more processor-intensive) or some of the high-quality RNGs found in the GSL.
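
For reference, here's a minimal sketch (not the exact harness I used) of how raw /dev/urandom output can be captured and piped into dieharder for that kind of comparison. The byte count is arbitrary, and piping into dieharder's stdin_input_raw generator (something like "./urandom-dump | dieharder -a -g 200") assumes a dieharder 3.x generator numbering; adjust to whatever your build reports with "dieharder -g -1".

/* Dump raw bytes from /dev/urandom to stdout for statistical testing.
 * Illustrative only; the default size of 64 MiB is just a placeholder. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	long remaining = (argc > 1) ? atol(argv[1]) : (64L * 1024 * 1024);
	unsigned char buf[4096];
	FILE *in = fopen("/dev/urandom", "rb");

	if (!in) {
		perror("/dev/urandom");
		return 1;
	}
	while (remaining > 0) {
		size_t want = remaining > (long)sizeof(buf) ? sizeof(buf)
							    : (size_t)remaining;
		size_t got = fread(buf, 1, want, in);

		if (got == 0)
			break;
		fwrite(buf, 1, got, stdout);
		remaining -= (long)got;
	}
	fclose(in);
	return 0;
}

Repeat that over many runs and tally the WEAK/FAILED verdicts, doing the same with the PRNG you're comparing against, and you get roughly the kind of failure-rate comparison I'm describing.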

It's also worth noting that this is on systems where userspace is consistently drawing from the non-blocking pool far faster than entropy is being fed into the blocking pool (by a factor of 100 or more).
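
If you want to see that imbalance on your own system, something like the following sketch works; it just polls the kernel's own entropy estimate via the standard /proc/sys/kernel/random/entropy_avail interface while your consumer is draining /dev/urandom. It only shows the starvation, it doesn't prove anything about output quality by itself.

/* Poll the input-pool entropy estimate once a second. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	for (;;) {
		FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
		int bits = -1;

		if (f) {
			if (fscanf(f, "%d", &bits) != 1)
				bits = -1;
			fclose(f);
		}
		printf("entropy_avail: %d bits\n", bits);
		fflush(stdout);
		sleep(1);
	}
	return 0;
}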

In short, I would not trust it as a CSPRNG (although I wouldn't trust most things touted as CSPRNGs either), or even for important simulations that need _lots_ of random numbers. I'm not saying it shouldn't be used for things like seeding other PRNGs, however (and TBH, I trust it more for that than I trust RDSEED or RDRAND).
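
That seeding use case is trivial, for what it's worth; a sketch along these lines is all I mean (the xorshift64 generator here is just a stand-in for whatever userspace PRNG you're actually seeding):

/* Seed a userspace PRNG from /dev/urandom. */
#include <stdio.h>
#include <stdint.h>

static uint64_t xorshift64(uint64_t *state)
{
	uint64_t x = *state;

	x ^= x << 13;
	x ^= x >> 7;
	x ^= x << 17;
	return *state = x;
}

int main(void)
{
	uint64_t seed = 0;
	FILE *f = fopen("/dev/urandom", "rb");

	/* xorshift64 needs a nonzero state, so reject an all-zero read. */
	if (!f || fread(&seed, sizeof(seed), 1, f) != 1 || seed == 0) {
		fprintf(stderr, "failed to read a seed\n");
		return 1;
	}
	fclose(f);

	for (int i = 0; i < 4; i++)
		printf("%016llx\n", (unsigned long long)xorshift64(&seed));
	return 0;
}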
