Re: [PATCH] random: always use batched entropy for get_random_u{32,64}

From: Greg Kroah-Hartman
Date: Sun Feb 16 2020 - 13:24:05 EST


On Sun, Feb 16, 2020 at 05:18:36PM +0100, Jason A. Donenfeld wrote:
> It turns out that RDRAND is pretty slow. Comparing these two
> constructions:
>
>   for (i = 0; i < CHACHA_BLOCK_SIZE; i += sizeof(ret))
>     arch_get_random_long(&ret);
>
> and
>
>   long buf[CHACHA_BLOCK_SIZE / sizeof(long)];
>   extract_crng((u8 *)buf);
>
> it amortizes out to 352 cycles per long for the top one and 107 cycles
> per long for the bottom one, on a Coffee Lake Refresh (Intel Core i9-9880H).
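>
> As a rough sketch of why the bottom one wins (simplified, ignoring the real
> code's per-cpu batches and locking; get_batched_long() is a hypothetical
> helper, not the actual random.c code), the batched construction pays for one
> chacha20 block and then hands out longs until the buffer runs dry:
>
>   static long batch[CHACHA_BLOCK_SIZE / sizeof(long)];
>   static unsigned int position = ARRAY_SIZE(batch);
>
>   static long get_batched_long(void)
>   {
>     if (position >= ARRAY_SIZE(batch)) {
>       extract_crng((u8 *)batch);  /* one chacha20 block refills the whole batch */
>       position = 0;
>     }
>     return batch[position++];
>   }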
>
> And importantly, the top one has the drawback of not benefiting from the
> real rng, whereas the bottom one has all the nice benefits of using our
> own chacha rng. As get_random_u{32,64} gets used in more places (perhaps
> beyond what it was originally intended for when it was introduced as
> get_random_{int,long} back in the md5 monstrosity era), it seems like it
> might be a good thing to strengthen its posture a tiny bit. Doing this
> should only make things stronger, not weaker, because that pool is already
> initialized with a bunch of rdrand data (when available). This way, we
> get the benefits of the hardware rng as well as our own rng.
>
> Another benefit of this is that we no longer hit pitfalls of the recent
> stream of AMD bugs in RDRAND. One often used code pattern for various
> things is:
>
>   do {
>     val = get_random_u32();
>   } while (hash_table_contains_key(val));
>
> That recent AMD bug rendered that pattern useless, whereas we can be quite
> certain that chacha20 output will give well-distributed numbers, no matter
> what.
>
> So, this simplification seems better both from a security perspective
> and from a performance perspective.
>
> Signed-off-by: Jason A. Donenfeld <Jason@xxxxxxxxx>
> Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
> ---
> drivers/char/random.c | 12 ------------
> 1 file changed, 12 deletions(-)
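>
> For reference, the 12 removed lines are essentially the arch_get_random_*()
> early returns at the top of get_random_u64() and get_random_u32(). A
> simplified sketch (not the literal diff) of the u64 side after the patch,
> reusing the hypothetical helper from above:
>
>   /* Simplified sketch of get_random_u64() after the change. */
>   u64 get_random_u64(void)
>   {
>     /*
>      * Removed by this patch: the RDRAND fast path that skipped the
>      * chacha pool whenever the architectural RNG was available, i.e.
>      *
>      *   if (arch_get_random_long((unsigned long *)&ret))
>      *     return ret;
>      */
>
>     /* Always take the (per-cpu, batched) chacha20 path instead. */
>     return get_batched_long();
>   }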

Looks good to me, thanks for doing this:

Reviewed-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>