Re: [PATCH] random: make try_to_generate_entropy() more robust

From: Jörn Engel
Date: Sat Oct 19 2019 - 10:37:39 EST


On Sat, Oct 19, 2019 at 12:49:52PM +0200, Thomas Gleixner wrote:
>
> One slightly related thing I was looking into is that the mixing of
> interrupt entropy is always done from hard interrupt context. That has a
> few issues:
>
> 1) It's pretty visible in profiles for high frequency interrupt
> scenarios.
>
> 2) The regs content can be pretty boring, i.e. not very
> non-deterministic, when the interrupt hits idle.
>
> Not an issue in the try_to_generate_entropy() case probably, but
> that still needs some careful investigation.
>
> For #1 I was looking into a trivial storage model with a per cpu ring
> buffer, where each entry contains the entropy data of one interrupt and let
> some thread or whatever handle the mixing later.

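Something along those lines, maybe (rough sketch only; every
irq_entropy_* name below is invented for illustration and is not
existing kernel API):

/* One entry per interrupt, filled from hard interrupt context. */
struct irq_entropy_event {
	cycles_t	cycles;
	unsigned long	ip;
	int		irq;
};

#define IRQ_ENTROPY_RING_SIZE	64	/* power of two */

struct irq_entropy_ring {
	unsigned int			head;	/* producer, hard irq */
	unsigned int			tail;	/* consumer, thread */
	struct irq_entropy_event	ev[IRQ_ENTROPY_RING_SIZE];
};
static DEFINE_PER_CPU(struct irq_entropy_ring, irq_entropy_ring);

/* Hard interrupt side: only record, no mixing. */
static void irq_entropy_record(int irq, struct pt_regs *regs)
{
	struct irq_entropy_ring *ring = this_cpu_ptr(&irq_entropy_ring);
	struct irq_entropy_event *ev;

	ev = &ring->ev[ring->head++ & (IRQ_ENTROPY_RING_SIZE - 1)];
	ev->cycles = random_get_entropy();
	ev->ip = regs ? instruction_pointer(regs) : 0;
	ev->irq = irq;
}

A thread or workqueue then walks tail..head and does the real mixing
outside hard interrupt context; on overflow the producer simply
overwrites old entries, which only loses samples and doesn't hurt
correctness.
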
Or you can sum up all regs.

unsigned long regsum(struct pt_regs *regs)
{
	/* Treat the saved register frame as an array of longs and sum it. */
	unsigned long *r = (void *)regs;
	unsigned long sum = r[0];
	int i;

	for (i = 1; i < sizeof(*regs) / sizeof(*r); i++)
		sum += r[i];

	return sum;
}

Takes about one cycle per register in the current form, roughly half
that if the compiler can be convinced to unroll the loop. Either way
it's cheaper than rdtsc() on most, if not all, CPUs.

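For illustration, the sum could go in roughly where
add_interrupt_randomness() currently xors the instruction pointer into
the per cpu fast_pool (untested sketch against the current
drivers/char/random.c, not a patch):

	/* In add_interrupt_randomness(): fold in the whole frame. */
	struct pt_regs *regs = get_irq_regs();
	unsigned long rsum = regs ? regsum(regs) : _RET_IP_;

	fast_pool->pool[2] ^= rsum;
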
If interrupt volume is high, the regsum should be good enough. The
final mixing can be amortized as well. Once the pool is initialized,
you can mix new entropy once per jiffy or so and otherwise just add to a
percpu counter or something like that.

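A percpu accumulator along those lines could look roughly like this
(sketch only; it assumes it lives in drivers/char/random.c so it can
reach the existing input_pool/mix_pool_bytes() internals, and the
irq_entropy_acc naming is made up):

struct irq_entropy_acc {
	unsigned long	sum;		/* cheap per-interrupt additions */
	unsigned long	last_mix;	/* jiffies value of the last mix */
};
static DEFINE_PER_CPU(struct irq_entropy_acc, irq_entropy_acc);

static void accumulate_irq_entropy(struct pt_regs *regs)
{
	struct irq_entropy_acc *acc = this_cpu_ptr(&irq_entropy_acc);

	/* Cheap part, done on every interrupt. */
	acc->sum += (regs ? regsum(regs) : 0) ^ random_get_entropy();

	/* Expensive part, at most once per jiffy per cpu. */
	if (acc->last_mix != jiffies) {
		acc->last_mix = jiffies;
		mix_pool_bytes(&input_pool, &acc->sum, sizeof(acc->sum));
	}
}

That turns the pool locking into at most HZ pool updates per second
per cpu instead of one per interrupt.
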
> That would allow filtering out 'constant' data (#), but it would also give
> Jörn's approach a way to get at some 'random' register content independent
> of the context in which the timer softirq is running.

Jörn

--
Given two functions foo_safe() and foo_fast(), the shorthand foo()
should be an alias for foo_safe(), never foo_fast().
-- me