monitoring entropy

Colin Plumb (colin@nyx.net)
Tue, 14 Oct 1997 21:10:42 -0600 (MDT)


Given that draining the entropy pool is only a minor denial-of-service
attack (a fork bomb is a much more effective one), is this really worth
worrying about? /dev/urandom is plenty good for any conceivable practical
application, and that can't be denied.

I'd like to emphasize that: for all practical purposes, /dev/urandom
will deliver an infinite amount of random data unpredictable to
any attacker who is not spying on you as you generate it.
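To make "just use /dev/urandom" concrete, here's a minimal sketch of
reading key material from it on Linux (the helper name is mine, and
error handling is trimmed to the essentials):

```c
#include <assert.h>
#include <fcntl.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

/* Fill buf with len bytes from /dev/urandom; returns 0 on success.
   /dev/urandom never blocks, so this is safe to call from any
   ordinary application, e.g. to generate a 128-bit session key. */
static int fill_urandom(unsigned char *buf, size_t len)
{
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return -1;

    size_t got = 0;
    while (got < len) {                       /* short reads are legal */
        ssize_t n = read(fd, buf + got, len - got);
        if (n <= 0) {
            close(fd);
            return -1;
        }
        got += (size_t)n;
    }
    close(fd);
    return 0;
}
```

Two independently filled buffers will differ with overwhelming
probability, which is all a practical application ever needs.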

/dev/random provides a stronger guarantee: it is unpredictable to
an attacker with *infinite* computational power. Doing this requires
putting as much entropy into the accumulation system as you read
out, so an attacker's uncertainty about the output can be traced
back to their uncertainty about the input.
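The accounting behind that guarantee can be sketched as a toy model
(this is an illustration of the bookkeeping, not the actual kernel
code): credit the pool when samples arrive, debit it when bits are
revealed, and never hand out more than was credited.

```c
#include <assert.h>
#include <stddef.h>

static size_t entropy_bits;   /* estimated entropy currently in the pool */

/* Called from the sampling path: credit the pool with est_bits,
   the conservative entropy estimate for one input event. */
static void credit_entropy(size_t est_bits)
{
    entropy_bits += est_bits;
}

/* /dev/random-style read: reveal at most as many bits as were put
   in, and debit the pool by what was revealed.  Returns the number
   of bits actually delivered; a real reader blocks for the rest. */
static size_t strong_read(size_t want_bits)
{
    size_t give = want_bits < entropy_bits ? want_bits : entropy_bits;
    entropy_bits -= give;
    return give;
}
```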

To provide a "reserved" pool with a guarantee of this strength requires
that some input entropy be set aside for root-only use and not used to
generate user random output. That takes up more data space in the
kernel and makes the ordinary output weaker.

Doing this without slowing down the entropy-gathering operation (which
is triggered each interrupt, so it has to be *fast*) is also tricky.

Frankly, I don't see the point.

Oh, a question for folks who understand the multi-platform coding style.
On the Pentium, /dev/random takes advantage of the clock cycle counter
to get as much timing information as possible. Many other processors
have this too (Alpha, MIPS, PowerPC, ...) and it would be nice for it
to work on them too. Is there a better way to do this than to have
a zillion #ifdefs? It's just a couple of lines of inline asm per
platform. Should the function go into <asm/foo.h> somewhere? What
should it be called, and where should it go?
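One possible shape for this: hide the per-architecture asm behind a
single inline function, so random.c itself stays #ifdef-free. The
name get_cycles() and the exact asm are only a sketch (i586 rdtsc and
PowerPC mftb shown, with a portable fallback for everything else):

```c
#include <assert.h>
#include <time.h>

/* Read the CPU cycle counter, or the best timing source available.
   Only the low bits matter here: we want timing noise, not an
   absolute timestamp. */
static inline unsigned long get_cycles(void)
{
#if defined(__i386__) || defined(__x86_64__)
    unsigned int lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi)); /* Pentium TSC */
    return lo;                      /* low word is plenty for jitter */
#elif defined(__powerpc__)
    unsigned long tb;
    __asm__ __volatile__("mftb %0" : "=r"(tb));         /* time base */
    return tb;
#else
    /* Fallback for CPUs without a readable cycle counter */
    return (unsigned long)clock();
#endif
}
```

Each architecture then contributes its two lines of asm in one place,
and the interrupt path just calls get_cycles() unconditionally.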

-- 
	-Colin