On Sun, 19 Aug 2001, Theodore Tso wrote:
> The bottom line is it really depends on how paranoid you want to be,
> and how much and how closely you want /dev/random to reliably replace
> a true hardware random number generator which relies on some physical
> process (by measuring quantum noise using a noise diode, or by
> measuring radioactive decay). For most purposes, and against most
> adversaries, it's probably acceptable to depend on network interrupts,
> even if the entropy estimator may be overestimating things.
Can I propose an add_untrusted_randomness()? This would work identically
to add_timer_randomness() but would pass batch_entropy_store() an entropy
estimate of 0. The store would then be made to drop 0-entropy elements
on the floor whenever the queue was more than, say, half full. This would
let us take advantage of 'potential' entropy sources such as network
interrupts, strengthening /dev/urandom without weakening /dev/random.
(Yes, I see dont_count_entropy, but it doesn't appear to be used, and
doesn't address flooding the queue with 0-entropy entries. I'd take it
out.)
--
"Love the dolphins," she advised him. "Write by W.A.S.T.E."