Re: [PATCH v2 02/03]: hwrng: create filler thread
From: H. Peter Anvin
Date: Thu Mar 27 2014 - 00:49:09 EST
On 03/26/2014 06:11 PM, Andy Lutomirski wrote:
>
> TBH I'm highly skeptical of this kind of entropy estimation.
> /dev/random is IMO just silly, since you need to have very
> conservative entropy estimates for the concept to really work, and
> that ends up being hideously slow.
In the absence of a hardware entropy source, it is, but for long-lived
keys, delay is better than bad key generation.
A major reason for entropy estimation is to control the amount of
backpressure. If you don't have backpressure, you only have generation
pressure, and you can't put your system to sleep when the hwrng keeps
outputting data. Worse, if your entropy source is inexhaustible, you
might end up spending all your CPU time processing its output.
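To make that concrete, here is a rough user-space model of the kind of
loop I mean (not the actual patch; the pool size, low-water mark and
chunk size are numbers invented for illustration):

/* Illustrative user-space model only -- not the patch itself. */
#include <stdio.h>
#include <unistd.h>

#define POOL_BITS 4096          /* hypothetical pool size, in bits */
#define LOW_WATER 1024          /* refill only when below this */

static int pool_entropy;        /* credited entropy estimate, in bits */

static int read_hwrng_chunk(void)
{
        /* stand-in for pulling and mixing one chunk from the hwrng */
        return 256;
}

int main(void)
{
        for (;;) {
                /* model consumers draining the credited estimate */
                if (pool_entropy >= 128)
                        pool_entropy -= 128;

                if (pool_entropy >= LOW_WATER) {
                        /* backpressure: let the CPU idle */
                        sleep(1);
                        continue;
                }

                /* generation pressure: pull from the hwrng, credit it */
                pool_entropy += read_hwrng_chunk();
                if (pool_entropy > POOL_BITS)
                        pool_entropy = POOL_BITS;
                printf("refilled, pool now %d bits\n", pool_entropy);
        }
}

Take out the LOW_WATER check and there is nothing to stop that loop from
running flat out whenever the source can deliver -- which is exactly the
all-CPU-on-hwrng failure mode above.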
> Also, in the /dev/random sense,
> most hardware RNGs have no entropy at all, since they're likely to be
> FIPS-approved DRBGs that don't have a real non-deterministic source.
Such a device has no business being a Linux hwrng device. We already
have a PRNG (DRBG) in the kernel; the *only* purpose of a hwrng device
is to be an entropy source.
> For the kernel's RNG to be secure, I think it should have the property
> that it still works if you rescale all the entropy estimates by any
> constant that's decently close to 1.
That is correct.
> If entropy estimates are systematically too low, then a naive
> implementation results in an excessively long window during early
> bootup in which /dev/urandom is completely insecure.
Eh? What mechanism would make /dev/urandom any less secure due to
entropy underestimation? The whole *point* is that we should
systematically underestimate entropy -- and we do: according to the
research papers that have analyzed the state of things, we underestimate
by orders of magnitude. That is the only possible way to do it for
non-hwrng sources.
> If entropy estimates are systematically too high, then a naive
> implementation fails to do a catastrophic reseed, and the RNG can be
> brute-forced.
This again is unacceptable. We really should not overestimate.
> So I think that the core code should do something along the lines of
> using progressively larger reseeds. Since I think that /dev/random is
> silly, this means that we only really care about the extent to which
> "entropy" measures entropy conditioned on whatever an attacker can
> actually compute. Since this could vary widely between devices (e.g.
> if your TPM is malicious), I think that the best we can do is to
> collect ~256 bits from everything available, shove it all in to the
> core together, and repeat. For all I know, the core code already does
> this.
>
> The upshot is that the actual rescaling factor should barely matter.
> 50% is probably fine. So is 100% and 25%. 10% is probably asking for
> trouble during early boot if all you have is a TPM.
I don't see why small factors should be a problem at all (except that it
discourages /dev/random usage). Keep in mind we still add the entropy
-- we just don't credit its existence.
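That split already exists at the user-space interface, by the way: a
plain write() to /dev/random mixes the data into the input pool but
credits nothing, along these lines:

/* Mix bytes of unknown quality into the pool with zero credit. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
        unsigned char buf[64] = { 0 };  /* data of unknown quality */
        int fd = open("/dev/random", O_WRONLY);

        if (fd < 0)
                return 1;
        if (write(fd, buf, sizeof(buf)) < 0) {  /* mixed in, no credit */
                close(fd);
                return 1;
        }
        close(fd);
        return 0;
}

Crediting is a separate, privileged step (the RNDADDENTROPY ioctl), and
that is where any derating factor belongs -- see the sketch further down.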
TPMs, in particular, should almost certainly be massively derated based
on how little we know about TPM internals.
As a concrete example: RDRAND is a hardware entropy source that is
architecturally allowed to be diluted by a DRBG up to 512 times. As far
as I know, no shipping hardware comes anywhere near that 512:1 ratio.
rngd currently does 512:1 data reduction, but
injecting the raw output at 1/512 credit ought to give a much better
result in terms of entropy.
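Concretely, that is just the existing RNDADDENTROPY ioctl with a
scaled-down entropy_count. A sketch only -- the chunk size and the
source of the raw bytes are placeholders; rngd would pull them from
RDRAND itself:

/* Feed raw hwrng output to the kernel, crediting only 1/512 of it. */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>

#define CHUNK 512   /* bytes of raw output per injection (placeholder) */

static void fill_from_rdrand(unsigned char *buf, size_t len)
{
        /* placeholder for pulling raw RDRAND output */
        memset(buf, 0xaa, len);
}

int main(void)
{
        struct rand_pool_info *info;
        int fd = open("/dev/random", O_WRONLY);

        if (fd < 0)
                return 1;

        info = malloc(sizeof(*info) + CHUNK);
        if (!info) {
                close(fd);
                return 1;
        }

        fill_from_rdrand((unsigned char *)info->buf, CHUNK);
        info->buf_size = CHUNK;
        /* inject all 512 bytes, but claim only 1/512 of the bits */
        info->entropy_count = CHUNK * 8 / 512;

        if (ioctl(fd, RNDADDENTROPY, info) < 0) { /* needs CAP_SYS_ADMIN */
                free(info);
                close(fd);
                return 1;
        }

        free(info);
        close(fd);
        return 0;
}

Compared with doing 512:1 whitening in user space, all of the raw data
still gets mixed into the pool; only the credit is scaled down.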
-hpa