On Dec 4, 2007, Matt Mackall wrote:
> On Tue, Dec 04, 2007 at 08:54:52AM -0800, Ray Lee wrote:
> > On Dec 4, 2007 8:18 AM, Adrian Bunk <bunk@xxxxxxxxxx> wrote:
> > > On Tue, Dec 04, 2007 at 12:41:25PM +0100, Marc Haber wrote:
> > > > While debugging Exim4's GnuTLS interface, I recently found out
> > > > that reading from /dev/urandom depletes entropy as much as
> > > > reading from /dev/random would. This somewhat surprised me,
> > > > since I had always believed that /dev/urandom delivers
> > > > lower-quality entropy than /dev/random, but lots of it.
> > >
> > > man 4 random
> > > > This also means that I can "sabotage" applications reading from
> > > > /dev/random just by continuously reading from /dev/urandom, even
> > > > without meaning to do any harm.
> > >
> > > The bug would be closed as invalid.
> > >
> > > > Before I file a bug on bugzilla,
> > > > ...
> > > No matter what you consider as being better, changing a
> > > 12-year-old and widely used userspace interface like /dev/urandom
> > > is simply not an option.
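
Marc's underlying observation is easy to reproduce, by the way. A
minimal sketch, assuming only a readable /dev/urandom and the
/proc/sys/kernel/random/entropy_avail knob; run it and watch the
estimate drop:

/*
 * Drain /dev/urandom while watching the kernel's entropy estimate
 * fall.  Assumes only a readable /dev/urandom and
 * /proc/sys/kernel/random/entropy_avail.
 */
#include <stdio.h>
#include <unistd.h>

static int entropy_avail(void)
{
	FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
	int bits = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%d", &bits) != 1)
		bits = -1;
	fclose(f);
	return bits;
}

int main(void)
{
	char buf[4096];
	FILE *ur = fopen("/dev/urandom", "r");
	int i;

	if (!ur)
		return 1;
	for (i = 0; i < 16; i++) {
		/* each big read takes entropy out of the input pool */
		if (fread(buf, 1, sizeof(buf), ur) != sizeof(buf))
			break;
		printf("after read %2d: entropy_avail = %d bits\n",
		       i, entropy_avail());
		sleep(1);
	}
	fclose(ur);
	return 0;
}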
> > You seem to be confused. He's not talking about changing any
> > userspace interface, merely how the /dev/urandom data is generated.
> >
> > (Why hasn't anyone been cc:ing Matt on this?)
> >
> > For Matt's benefit, part of the original posting:
> > > Before I file a bug on bugzilla, can I ask why /dev/urandom wasn't
> > > implemented as a PRNG which is periodically (say, every 1024 bytes
> > > or even more) seeded from /dev/random? That way, /dev/random has a
> > > much higher chance of holding enough entropy for applications that
> > > really need "good" entropy.
> > A PRNG is clearly unacceptable. But roughly restated, why not have
> > /dev/urandom supply merely cryptographically strong random numbers,
> > rather than a mix between the 'true' random of /dev/random down to
> > the cryptographically strong stream it'll provide when /dev/random
> > is tapped? In principle, this'd leave more entropy available for
> > applications that really need it, especially on platforms that don't
> > generate a lot of entropy in the first place (servers).
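
For concreteness, Marc's scheme would look roughly like the following
as a userspace toy. The xorshift64 step is a stand-in, not a
cryptographically strong generator, and nothing here reflects how
drivers/char/random.c is actually structured:

/*
 * Toy model of the proposal: a generator reseeded from /dev/random
 * once per 1024 bytes of output.
 */
#include <stdint.h>
#include <stdio.h>

#define RESEED_INTERVAL 1024	/* bytes of output per /dev/random seed */

static uint64_t state = 1;
static unsigned int since_reseed = RESEED_INTERVAL;

static void reseed(void)
{
	FILE *rnd = fopen("/dev/random", "r");	/* blocks when pool is empty */

	if (rnd) {
		if (fread(&state, sizeof(state), 1, rnd) == 1)
			since_reseed = 0;
		fclose(rnd);
	}
	if (!state)		/* xorshift must never run from zero */
		state = 1;
}

static uint8_t toy_byte(void)
{
	if (since_reseed >= RESEED_INTERVAL)
		reseed();	/* on failure, keep running on old state */
	state ^= state << 13;
	state ^= state >> 7;
	state ^= state << 17;
	since_reseed++;
	return (uint8_t)state;
}

int main(void)
{
	int i;

	for (i = 0; i < 16; i++)
		printf("%02x", toy_byte());
	putchar('\n');
	return 0;
}

The point of the toy is only the accounting: entropy is consumed once
per RESEED_INTERVAL bytes of output instead of on every read.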
> The original /dev/urandom behavior was to use all the entropy that
> was available, and then degrade into a pure PRNG when it was gone.
> The intent is for /dev/urandom to be precisely as strong as
> /dev/random when entropy is readily available.
> The current behavior is to deplete the pool when there is a large
> amount of entropy, but to always leave enough entropy for /dev/random
> to be read. This means we never completely starve the /dev/random
> side. The default amount is twice the read wakeup threshold (128
> bits), settable in /proc/sys/kernel/random/.

I had forgotten that that was a lower bound, although it's kind of an
on-off toggle rather than proportional.
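
The rule Matt describes can be modeled in a few lines. The constant
and helper names below are invented for illustration; the real
accounting lives in drivers/char/random.c:

/*
 * A urandom read may only draw the pool down to twice the read wakeup
 * threshold, so blocking /dev/random readers are never starved
 * completely.
 */
#define READ_WAKEUP_THRESHOLD	64	/* bits; runtime value is in
					 * /proc/sys/kernel/random/read_wakeup_threshold */

static int urandom_entropy_grant(int pool_bits, int wanted_bits)
{
	int reserve = 2 * READ_WAKEUP_THRESHOLD;	/* the 128 bits Matt mentions */
	int spare = pool_bits - reserve;

	if (spare <= 0)
		return 0;	/* at or below the reserve: pure PRNG output */
	return wanted_bits < spare ? wanted_bits : spare;
}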
> But there's really not much point in changing this threshold. If
> you're reading the /dev/random side at the same rate or more often
> than entropy is appearing, you'll run out regardless of how big your
> buffer is.

Right, my thought is to throttle user + urandom use such that the
total stays below the available entropy. In another post I suggested
having a minimum bound (use no entropy) and a maximum bound (grab some
entropy), with the idea that between these values some limited entropy
could be used (a sketch follows below). I have to wonder, though,
whether the entropy available is at least as unpredictable as the
entropy itself. Clearly if you care about this a *lot* you will use a
hardware RNG.
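
The proportional policy would look something like this; the watermark
values are invented for illustration:

/*
 * Instead of an on-off toggle: no entropy use below a low watermark,
 * free use above a high one, and a linearly scaled grant in between.
 */
#define LOW_WATERMARK	128	/* bits: below this, use no entropy */
#define HIGH_WATERMARK	512	/* bits: above this, grab entropy freely */

static int throttled_grant(int pool_bits, int wanted_bits)
{
	if (pool_bits <= LOW_WATERMARK)
		return 0;
	if (pool_bits >= HIGH_WATERMARK)
		return wanted_bits;
	/* scale the grant with how full the pool is between the bounds */
	return wanted_bits * (pool_bits - LOW_WATERMARK) /
	       (HIGH_WATERMARK - LOW_WATERMARK);
}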