Re: [PATCH] random: fix rdrand mix-in
From: Theodore Y. Ts'o
Date: Tue Jul 17 2018 - 17:12:12 EST
On Tue, Jul 17, 2018 at 09:26:00AM -0700, Linus Torvalds wrote:
> On Tue, Jul 17, 2018 at 6:54 AM Arnd Bergmann <arnd@xxxxxxxx> wrote:
> >
> > The newly added arch_get_random_int() call was done incorrectly,
> > using the output only if rdrand hardware was /not/ available. The
> > compiler points out that the data is uninitialized in this case:
Yeah, oops. I had sent it for review to linux-crypto two days ago,
and no one had caught it there --- so thanks so much for catching it,
Arnd! I'm going to fold this into the existing patch so it's easier
to get this sent to stable.
> > for (b = bytes ; b > 0 ; b -= sizeof(__u32), i++) {
> > - if (arch_get_random_int(&t))
> > + if (!arch_get_random_int(&t))
> > continue;
> > buf[i] ^= t;
> > }
>
> Why not just make that "continue" be a "break"? If you fail once, you
> will fail the next time too (whether the arch just doesn't support it
> at all, or whether the HW entropy is just temporarily exhausted).
I wasn't sure how quickly the HW entropy would replenish itself; I
know that on the first RDRAND platforms it would effectively never
fail (as in, if six of the eight cores were calling RDRAND in a tight
loop, _maybe_ you could exhaust the HW entropy). But on more modern
systems with a huge number of cores (say, a 96-core Xeon), running
out of HW entropy was much more of a thing. My impression was that it
could replenish itself fairly quickly, so my thinking was that
continue was better than break.
The other thing that factored into my thinking was that this was
getting called from process context, and the process would be burning
CPU time running "Jitterentropy" anyway, so it didn't seem like we
would be wasting *that* much CPU time.
It's not a big deal either way, so I can make it a break if you think
that's better.
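
Just to be concrete, that would turn the quoted hunk into something
roughly like the sketch below (same variables as in the existing
loop; not the final patch):

	for (b = bytes; b > 0; b -= sizeof(__u32), i++) {
		if (!arch_get_random_int(&t))
			break;	/* give up on the first failure instead of retrying */
		buf[i] ^= t;
	}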
- Ted