Re: [PATCH 2/2] x86/random: Issue a warning if RDRAND or RDSEED fails
From: Dave Hansen
Date: Fri Feb 09 2024 - 15:37:51 EST
On 2/9/24 11:49, Jason A. Donenfeld wrote:
> [As an aside, I would like to note that a different construction of
> RDRAND could keep outputting good random numbers for a reeeeeallly
> long time without needing to reseed, or without penalty if RDSEED is
> depleted, and so could be made to actually never fail. But given the
> design goals of RDRAND, this kind of crypto is highly likely to never
> be implemented, so I'm not even moving to suggest that AMD/Intel just
> 'fix' the crypto design goals of the instruction. It's not gonna
> happen for lots of reasons.]
Intel's RDRAND reseeding behavior is spelled out here:
> https://www.intel.com/content/www/us/en/developer/articles/guide/intel-digital-random-number-generator-drng-software-implementation-guide.html
In the "Guaranteeing DBRG Reseeding" section.
> It's a bit of a scheduling/queueing thing, where different security
> contexts shouldn't be able to starve others out of the finite resource
> indefinitely.
>
> What I'm wondering is if that kind of fairness is even possible to
> achieve in the hardware or the microcode.
..
Even ignoring different security contexts, Intel's whitepaper claims
that RDRAND does not starve any thread:
> If multiple threads are invoking RDRAND simultaneously, total RDRAND
> throughput (across all threads) scales approximately linearly with
> the number of threads until no more hardware threads remain, the bus
> limits of the processor are reached, or the DRNG interface is fully
> saturated. Past this point, the maximum throughput is divided equally
> among the active threads. No threads get starved.
800 MB/sec of total RDRAND throughput across all threads, guaranteed
reseeding, and no starvation all sound pretty good to me.
Does that need improving?
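
For what it's worth, the scaling claim is easy enough to sanity-check
from userspace. Below is a rough, untested sketch (all names made up
for illustration, not a real benchmark suite): N threads hammer RDRAND
for about a second each, and we sum the per-thread byte counts. If the
whitepaper is right, the total should rise roughly linearly with N
until the DRNG interface saturates, and no single thread's count
should collapse to zero.

/* Build: gcc -O2 -mrdrnd -pthread rdrand_bench.c */
#include <immintrin.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static atomic_bool stop;

static void *worker(void *arg)
{
	unsigned long long v, bytes = 0;

	/* Pull 64-bit values until told to stop; count only successes */
	while (!atomic_load_explicit(&stop, memory_order_relaxed)) {
		if (_rdrand64_step(&v))
			bytes += 8;
	}
	*(unsigned long long *)arg = bytes;
	return NULL;
}

int main(int argc, char **argv)
{
	int i, nthreads = argc > 1 ? atoi(argv[1]) : 1;
	pthread_t tids[nthreads];
	unsigned long long counts[nthreads], total = 0;

	for (i = 0; i < nthreads; i++)
		pthread_create(&tids[i], NULL, worker, &counts[i]);

	sleep(1);			/* run for ~1 second */
	atomic_store(&stop, true);

	for (i = 0; i < nthreads; i++) {
		pthread_join(tids[i], NULL);
		total += counts[i];
	}
	printf("%d threads: ~%llu MB/sec\n", nthreads, total >> 20);
	return 0;
}
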