Re: [PATCH] random: use computational hash for entropy extraction
From: Simo Sorce
Date: Wed Feb 02 2022 - 08:36:13 EST
Jason,
if the current code is mistakenly stretching the entropy, perhaps the
correct course of action is to fix that mistake first, before
introducing a new conditioning function.
As it is, these patches cannot be said to just perform conditioning if
they are stretching the entropy; the risk is compounding errors and
voiding any reasonable analysis of the entropy carried through the RNG.
It would also be nice to have an explanation (in the patch or at least
in the commit message) of how entropy is preserved and why the specific
function chosen is cryptographically adequate. Note that there is no
study of the use of internal states of hash functions; it would be
better to base these decisions on solid ground by citing relevant
research or standards.
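
To be concrete about the distinction I am drawing: a conditioning step
can be credited with at most as much entropy as it was given, and never
with more bits than it outputs. As a purely illustrative bound (not a
reference to any existing accounting code in random.c):

    /* Purely illustrative: the conservative bound for crediting
     * entropy to the output of a conditioning step.  Crediting more
     * than this is stretching, not conditioning. */
    static unsigned int conditioned_entropy_bits(unsigned int in_entropy_bits,
                                                 unsigned int out_len_bits)
    {
            return in_entropy_bits < out_len_bits ? in_entropy_bits
                                                  : out_len_bits;
    }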
Thanks,
Simo.
On Wed, 2022-02-02 at 13:23 +0100, Jason A. Donenfeld wrote:
> Hi Stephan,
>
> It's like this for a few reasons:
>
> - Primarily, we want to feed 32 bytes back in after finalization (in
> this case as a PRF key), just as the code does before this patch, and
> return 32 bytes to the caller, and we don't want those to be relatable
> to each other after the seed is erased from the stack.
> - Actually, your statement isn't correct: _extract_entropy is called
> for 48 bytes at ~boot time, with the extra 16 bytes affecting the
> block and nonce positions of the chacha state. I'm not sure this is
> very sensible to do -- it really is not adding anything -- but I'd
> like to avoid changing multiple things at once, when these are better
> discussed and done separately. (I have a separate patch for something
> along those lines.)
> - Similarly, I'd like to avoid changing the general idea of what
> _extract_entropy does (the underscore version has never accounted for
> entropy counts), deferring anything like that, should it become
> necessary, to an additional patch, where again it can be discussed
> separately.
> - By deferring the RDRAND addition to the second phase, we avoid a
> potential compression call while the input pool lock is held, reducing
> our critical section.
> - HKDF-like constructions are well studied and understood in the model
> we're working in, so it forms a natural and somewhat boring fit for
> doing what we want to do.
>
> Regards,
> Jason
>
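To make sure I am reading the construction right, the extract-then-expand
shape described above looks to me roughly like the sketch below. This is
illustrative pseudo-C only: pool_hash_final(), keyed_prf() and
pool_rekey() are hypothetical stand-ins, not the interfaces in the patch,
which uses a real cryptographic hash and, per the discussion above, also
folds in RDRAND output and handles the 48-byte boot-time case.

    #include <linux/string.h>	/* memzero_explicit() */

    /*
     * Illustrative sketch only, not the code from the patch.  The
     * helpers are hypothetical stand-ins: pool_hash_final() for the
     * unkeyed hash over the input pool, keyed_prf() for a keyed hash
     * used as a PRF under the extracted seed, and pool_rekey() for
     * re-keying the pool with the fed-back value.
     */
    void pool_hash_final(unsigned char seed[32]);
    void keyed_prf(unsigned char out[32], const unsigned char key[32],
                   unsigned int counter);
    void pool_rekey(const unsigned char key[32]);

    static void extract_entropy_sketch(unsigned char out[32])
    {
            unsigned char seed[32], next_key[32];

            /* Extract: condense the input pool into a short seed. */
            pool_hash_final(seed);

            /* Expand: derive two outputs under the seed with distinct
             * counters, so neither can be computed from the other. */
            keyed_prf(next_key, seed, 0);	/* fed back into the pool */
            keyed_prf(out, seed, 1);		/* returned to the caller */

            pool_rekey(next_key);

            /* Wipe the seed so the fed-back key and the caller's bytes
             * cannot be related after the fact. */
            memzero_explicit(seed, sizeof(seed));
            memzero_explicit(next_key, sizeof(next_key));
    }
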
--
Simo Sorce
RHEL Crypto Team
Red Hat, Inc