Re: Large post detailing recent Linux RNG improvements

From: Sandy Harris
Date: Thu Mar 24 2022 - 06:29:46 EST


Sandy Harris <sandyinchina@xxxxxxxxx> wrote:

> Jason A. Donenfeld <Jason@xxxxxxxxx> wrote:
>
> > Thought I should mention here that I've written up the various RNG
> > things I've been working on for 5.17 & 5.18 here:
> > https://www.zx2c4.com/projects/linux-rng-5.17-5.18/ .
> >
> > Feel free to discuss on list here if you'd like, or if you see
> > something you don't like, I'll happily review patches!
>
> Your code includes:
>
> enum {
>         POOL_BITS = BLAKE2S_HASH_SIZE * 8,
>         POOL_MIN_BITS = POOL_BITS /* No point in settling for less. */
> };
>
> static struct {
>         struct blake2s_state hash;
>         spinlock_t lock;
>         unsigned int entropy_count;
> } input_pool = {
>         .hash.h = { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE),
>                     BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4,
>                     BLAKE2S_IV5, BLAKE2S_IV6, BLAKE2S_IV7 },
>         .hash.outlen = BLAKE2S_HASH_SIZE,
>         .lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
> };
>
> As far as I can tell, you have eliminated the 4K-bit input pool
> that this driver has always used & are just using the hash
> context as the input pool. To me, this looks like an error.
>
> A side effect of that is losing the latent-entropy attribute
> on input_pool[] so we no longer get initialisation from
> the plugin. Another error.
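For anyone following along without the kernel source in front of them, the design being questioned treats the hash context itself as the pool: mixing input reduces to a hash update, and extraction reduces to taking a digest. A rough userspace sketch of that idea, using Python's hashlib.blake2s purely as an illustration (the names here are mine, not the kernel's):

```python
import hashlib

# Illustration only, not kernel code: the BLAKE2s state *is* the pool.
pool = hashlib.blake2s()

def mix_pool_bytes(data: bytes) -> None:
    # "Mixing" is just absorbing input into the running hash state.
    pool.update(data)

def extract(n: int) -> bytes:
    # Extraction hashes the current state; capped at the 32-byte
    # BLAKE2s output size, mirroring BLAKE2S_HASH_SIZE above.
    return pool.copy().digest()[:min(n, 32)]

mix_pool_bytes(b"some interrupt timing samples")
out = extract(32)
```

Note that in this scheme the pool can never hold more state than the hash context itself carries, which is exactly the size objection raised above.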

I could see reasonable arguments for reducing the size of
the input pool, since that would save both kernel memory
and time spent hashing. Personally, though, I would not
consider anything under 2K bits without seeing strong
arguments to justify it.

You seem to have gone to 512 bits without showing
any analysis to justify it. Have I just missed it?