On Thu, Sep 24, 2015 at 07:37:39AM -0400, Austin S Hemmelgarn wrote:
> Aside from the literature scattered across the web, there is the fact that
> it fails Dieharder tests far more often than a high-quality RNG should.
> (Even a good one should fail from time to time; one that never does is
> inherently flawed for other reasons. But I've had cases where I've done
> thousands of dieharder runs and it failed almost 10% of the time, while
> mt19937 in otherwise identical tests failed only about 1-2% of the time.)
> I will admit that it is significantly better than any libc implementation
> of rand() that I've seen, and than many other PRNGs (notably, it is
> significantly more random than the FIPS 140 DRBGs), but it usually does
> not do as well as OpenBSD's /dev/arandom (which is also far more
> processor-intensive, from what I've seen) or some of the high-quality
> RNGs found in the GSL.
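For concreteness, here is a minimal sketch of how such a repeated-run
comparison might be scripted; the invocation details are assumptions, not
Austin's actual methodology. It uses dieharder's stdin_input_raw generator
(-g 200) and a single test (-d 0, diehard_birthdays) so that many runs stay
cheap, feeding bytes from /dev/urandom and from Python's Mersenne Twister
(MT19937) and tallying the verdicts:

import os
import random
import subprocess

def run_once(byte_source):
    """Run one dieharder test, streaming bytes from byte_source into stdin."""
    proc = subprocess.Popen(
        ["dieharder", "-d", "0", "-g", "200"],  # -g 200 = stdin_input_raw
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    )
    try:
        while proc.poll() is None:              # keep feeding until it exits
            proc.stdin.write(byte_source(1 << 16))
    except BrokenPipeError:
        pass                                    # dieharder read all it needed
    finally:
        try:
            proc.stdin.close()
        except BrokenPipeError:
            pass
    out = proc.stdout.read().decode()
    proc.wait()
    for verdict in ("FAILED", "WEAK", "PASSED"):
        if verdict in out:
            return verdict
    return "UNKNOWN"

def mt19937_bytes(n):
    """Raw bytes from Python's Mersenne Twister (MT19937)."""
    return random.getrandbits(8 * n).to_bytes(n, "little")

for name, src in (("/dev/urandom", os.urandom), ("mt19937", mt19937_bytes)):
    verdicts = [run_once(src) for _ in range(100)]
    print(name, {v: verdicts.count(v) for v in sorted(set(verdicts))})

With only a single test per run this measures far less than a full -a
battery, but it is enough to see whether one source's failure rate really
sits several times above the other's.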
> Using /dev/urandom directly, yes, that doesn't make sense, because it
> consistently returns non-uniformly random numbers when used to generate
> larger amounts of entropy than the blocking pool can source.
Why do you think this is the case? Reproduction, please?
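As a minimal sketch of one way such a reproduction could be attempted (an
illustration only, assuming SciPy is available, not a test anyone in this
thread actually ran): read far more from /dev/urandom than the blocking
pool could possibly hold and chi-squared-test the byte frequencies. If the
output were non-uniform as claimed, the p-values would be consistently
tiny; for a uniform source they should be spread across [0, 1].

import collections
import os
from scipy.stats import chisquare   # SciPy assumed available

N = 64 * 1024 * 1024                # 64 MiB, far beyond any pool size
counts = collections.Counter(os.urandom(N))
observed = [counts.get(b, 0) for b in range(256)]
stat, p = chisquare(observed)       # null hypothesis: uniform over 0..255
print("chi^2 = %.1f, p = %.4f" % (stat, p))

Repeating this and looking at the distribution of p-values, rather than any
single run, is the stronger version of the check.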
- Ted