Re: [PATCH 2/2] x86/random: Issue a warning if RDRAND or RDSEED fails

From: Dr. Greg
Date: Mon Feb 05 2024 - 20:18:24 EST


On Wed, Jan 31, 2024 at 08:16:56AM +0000, Reshetova, Elena wrote:

Good evening, I hope the week has started well for everyone.

> > On Tue, Jan 30, 2024 at 06:49:15PM +0100, Jason A. Donenfeld wrote:
> > > On Tue, Jan 30, 2024 at 6:32 PM Dave Hansen <dave.hansen@xxxxxxxxx> wrote:
> > > >
> > > > On 1/30/24 05:45, Reshetova, Elena wrote:
> > > > >> You're the Intel employee so you can find out about this with much
> > > > >> more assurance than me, but I understand the sentence above to be _way
> > > > >> more_ true for RDRAND than for RDSEED. If your informed opinion is,
> > > > >> "RDRAND failing can only be due to totally broken hardware"
> > > > > No, this is not the case per the Intel SDM. I think we can live with the
> > > > > simple assumption that both of these instructions can fail not just due to
> > > > > broken HW, but also due to enough pressure put on the whole DRBG
> > > > > construction that supplies random numbers via RDRAND/RDSEED.
> > > >
> > > > I don't think the SDM is the right thing to look at for guidance here.
> > > >
> > > > Despite the SDM allowing it, we (software) need RDRAND/RDSEED failures
> > > > to be exceedingly rare by design. If they're not, we're going to get
> > > > our trusty torches and pitchforks and go after the folks who built the
> > > > broken hardware.
> > > >
> > > > Repeat after me:
> > > >
> > > > Regular RDRAND/RDSEED failures only occur on broken hardware
> > > >
> > > > If it's nice hardware that's gone bad, then we WARN() and try to make
> > > > the best of it. If it turns out that WARN() was because of a broken
> > > > hardware _design_ then we go sharpen the pitchforks.
> > > >
> > > > Anybody disagree?
> > >
> > > Yes, I disagree. I made a trivial test that shows RDSEED breaks easily
> > > in a busy loop. So at the very least, your statement holds true only
> > > for RDRAND.
> > >
> > > But, anyway, if the statement "RDRAND failures only occur on broken
> > > hardware" is true, then a WARN() in the failure path there presents no
> > > DoS potential of any kind, and so that's a straightforward conclusion
> > > to this discussion. However, that really hinges on "RDRAND failures
> > > only occur on broken hardware" being a true statement.
> >
> > There's a useful comment here from an Intel engineer
> >
> > https://web.archive.org/web/20190219074642/https://software.intel.com/en-us/blogs/2012/11/17/the-difference-between-rdrand-and-rdseed
> >
> > "RDRAND is, indeed, faster than RDSEED because it comes
> > from a hardware-based pseudorandom number generator.
> > One seed value (effectively, the output from one RDSEED
> > command) can provide up to 511 128-bit random values
> > before forcing a reseed"
> >
> > We know we can exhaust RDSEED directly pretty trivially. Making your
> > test program run in parallel across 20 CPUs, I got a mere 3% success
> > rate from RDSEED.
> >
> > If RDRAND is reseeding every 511 values, RDRAND output would have
> > to be consumed significantly faster than RDSEED for reseeds to
> > happen frequently enough to exhaust the seed supply.
> >
> > This looks pretty hard, but maybe with a large enough CPU count
> > this would be possible under extreme load?
> >
> > So I'm not convinced we can blindly wave away RDRAND failures as
> > guaranteed to mean broken hardware.
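
Doing the arithmetic on the figure quoted above: one seed yields up
to 511 x 128 bits, i.e. roughly 8 KiB of RDRAND output per reseed.
Assuming that figure still holds, an attacker trying to starve the
seed supply through RDRAND alone has to pull on the order of 500
times more data through the instruction than a direct RDSEED
exhaustion does, which matches Daniel's intuition that depleting
RDRAND this way is hard.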

> This matches both my understanding (I do have a cryptography
> background and an understanding of how cryptographic RNGs work) and
> the official public docs that Intel has published on this matter.
> Given that the physical entropy source is limited anyhow, putting
> enough pressure on the whole construction should be able to make
> RDRAND fail: if the intermediate AES-CBC-MAC extractor/conditioner
> is not getting its min-entropy input rate, it won't produce a proper
> seed for the AES-CTR DRBG. Of course the exact details/numbers can
> vary between different generations of the Intel DRNG implementation,
> and the platforms it runs on, so be careful about sticking to
> concrete numbers.
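
For readers without the Intel DRNG documentation at hand, the
construction Elena describes is, as far as we understand it from
Intel's public DRNG Software Implementation Guide, roughly the
following pipeline:

  hardware entropy source
           |
           v
  AES-CBC-MAC conditioner
           |
           +---> ENRNG (SP800-90B/C) ---------> RDSEED
           |
           +---> AES-CTR DRBG (SP800-90A) ----> RDRAND

If the conditioner cannot keep up with demand, RDSEED feels it first,
since every RDSEED value requires fresh conditioned entropy, while
RDRAND is buffered by the DRBG between reseeds.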

In the spirit of that philosophy we proffer the response below.

> That said, I have taken an AR to follow up internally on what can be
> done to improve our situation with RDRAND/RDSEED. But I would still
> like to finish the discussion on what people think should be done in
> the meanwhile, keeping in mind that the problem is not
> Intel-specific, despite us Intel people bringing it up for public
> discussion first. The old saying still applies: "Please don't shoot
> the messenger" )) We are actually trying to be open about these
> things and create a public discussion.

Actually, I now believe there is clear evidence that the problem is
indeed Intel-specific. In light of our testing, it will be
interesting to see what your 'AR' returns with respect to an official
response from Intel engineering on this issue.

One of the very bright young engineers collaborating on Quixote, who
has been following this conversation, took it upon himself to do some
very methodical engineering analysis of this issue. I'm the
messenger, but this is very much his work product.

Executive summary is as follows:

- No RDRAND depletion failures were observable with either the Intel
or AMD hardware that was load tested.

- RDSEED depletion is an Intel-specific issue; AMD's RDSEED
implementation could not be provoked into failure.

- AMD's RDRAND/RDSEED implementation is significantly slower than
Intel's.

Here are the engineer's lab notes verbatim:

---------------------------------------------------------------------------
I tested both the single-threaded and OMP-multithreaded versions of
the RDSEED/RDRAND depletion loop on each of the machines below.

AMD: 2X AMD EPYC 7713 (Milan) 64-Core Processor @ 2.0 GHz, 128
physical cores total

Intel: 2X Intel Xeon Gold 6140 (Skylake) 18-Core Processor @ 2.3 GHz,
36 physical cores total

Single-threaded results:

Test case: 1,000,000 iterations each for RDRAND and RDSEED, n=100
tests, single-threaded.

AMD: 100% success rate for both RDRAND and RDSEED for all tests,
runtime 0.909-1.055s (min-max).

Intel: 100% success rate for RDRAND for all tests, 20.01-20.12%
(min-max) success rate for RDSEED, runtime 0.256-0.281s (min-max).

OMP multithreaded results:

Test case: 1,000,000 iterations per thread, for both RDRAND and
RDSEED, n=100 tests, OMP multithreaded with OMP_NUM_THREADS=<total
physical cores> (i.e. 128 for AMD and 36 for Intel)

AMD: 100% success rate for both RDRAND and RDSEED for all tests,
runtime 47.229-47.603s (min-max).

Intel: 100% success rate for RDRAND for all tests, 1.77-5.62%
(min-max) success rate for RDSEED, runtime 0.562-0.595s (min-max).

CONCLUSION

RDSEED failure was reproducibly induced on the Intel Skylake platform,
for both single- and multithreaded tests, whereas RDSEED failure could
not be induced on the AMD platform for either test. RDRAND did not
fail on either platform for either test.

AMD execution time was roughly 4x slower than Intel (1s vs 0.25s) for
the single-threaded test, and almost 100x slower than Intel (47s vs
0.5s) for the multithreaded test. The difference in clock rates (2.0
GHz for AMD vs 2.3 GHz for Intel) is not sufficient to explain these
runtime differences. So it seems likely that AMD is gating the rate at
which a new RDSEED value can be requested.
---------------------------------------------------------------------------
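
Sancho's actual test program was not included in his notes, so as a
point of reference here is a minimal sketch of the depletion loop
being described; this is my reconstruction, not his code. The
*_step() intrinsics return 1 when a value was delivered and 0 when
the hardware signals exhaustion via a cleared carry flag; failures
are counted rather than retried:

/* rdtest.c - RDRAND/RDSEED depletion loop (sketch).
 *
 * Build: gcc -O2 -fopenmp -mrdrnd -mrdseed rdtest.c -o rdtest
 * Run:   OMP_NUM_THREADS=1 ./rdtest  (single-threaded case)
 *        OMP_NUM_THREADS=<physical cores> ./rdtest  (parallel case)
 */
#include <stdio.h>
#include <immintrin.h>

#define ITERATIONS 1000000UL

int main(void)
{
        unsigned long rdrand_ok = 0, rdseed_ok = 0, total = 0;

        #pragma omp parallel reduction(+:rdrand_ok,rdseed_ok,total)
        {
                unsigned long long v;
                unsigned long i;

                /* Each thread hammers both instructions; failures
                 * are counted, not retried. */
                for (i = 0; i < ITERATIONS; i++) {
                        if (_rdrand64_step(&v))
                                rdrand_ok++;
                        if (_rdseed64_step(&v))
                                rdseed_ok++;
                        total++;
                }
        }

        printf("RDRAND success: %6.2f%%\n", 100.0 * rdrand_ok / total);
        printf("RDSEED success: %6.2f%%\n", 100.0 * rdseed_ok / total);

        return 0;
}

Run with OMP_NUM_THREADS=1 this corresponds to the single-threaded
numbers above; with one thread per physical core it corresponds to
the multithreaded numbers.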

Speaking now with my voice:

Unless additional information surfaces, and despite our collective
handwringing, as long as the RDRAND instruction is used as the
cryptographic primitive there appears to be little likelihood of a
DoS randomness attack against a TDX-based CoCo virtual machine.

While it is highly unlikely we will ever get an 'official' readout on
this issue, I suspect there is a high probability that Intel
engineering favored performance over depletion resistance in their
RDSEED/RDRAND implementation.

AMD 'appears', and without engineering feedback from AMD I would
emphasize the notion of 'appears', to have embraced the principle of
taking steps to eliminate the possibility of a socket-based adversary
attack against their RNG infrastructure.

> Elena.

Hopefully the above is useful for everyone interested in this issue.

Once again, a thank you to our version of 'Sancho', who has also read
Cervantes at length, for his legwork on this... :-)

Have a good remainder of the week.

As always,
Dr. Greg

The Quixote Project - Flailing at the Travails of Cybersecurity
https://github.com/Quixote-Project