Re: [PATCH v5 0/7] /dev/random - a new approach
From: Stephan Mueller
Date: Tue Jun 21 2016 - 01:18:57 EST
On Tuesday, 21 June 2016, 01:12:55, Theodore Ts'o wrote:
Hi Theodore,
> On Mon, Jun 20, 2016 at 09:00:49PM +0200, Stephan Mueller wrote:
> > The time stamp maintenance is the exact cause of the correlation: one HID
> > event triggers:
> >
> > - add_interrupt_randomness, which takes a high-res time stamp, jiffies and
> > some pointers
> >
> > - add_input_randomness, which takes a high-res time stamp, jiffies and the
> > HID event value
> >
> > The same applies to disk events. My suggestion is to get rid of the double
> > counting of time stamps for one event.
> >
> > And I guess I do not need to stress that correlation of data that is
> > supposed to be entropic is not good :-)
>
> What is your concern, specifically? If it is in the entropy
> accounting, there is more entropy in HID event interrupts, so I don't
> think adding the extra 1/64th bit of entropy is going to be problematic.
My concern is that interrupts carry *much* more entropy than the 1/64th of a
bit they are credited with. With a revaluation of the entropy assumed for
interrupts, we would serve *all* systems much better, not just systems with
HID devices.
As said, I think we heavily penalize server-type and VM environments relative
to desktop systems by crediting entropy on a large scale to HID events and,
conversely, only to a much lesser degree to interrupts.
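To put rough numbers on that imbalance, here is a toy userspace calculation
using the crediting rules as I read them today (about 1 bit per fast_pool
spill of 64 interrupts, and up to 11 bits per HID event out of
add_timer_randomness(); the event count of 256 is an arbitrary example):

#include <stdio.h>

int main(void)
{
	int events = 256;		/* arbitrary example count */
	double irq_bits = 1.0 / 64;	/* ~1 bit credited per 64 interrupts */
	int hid_bits_max = 11;		/* cap in add_timer_randomness() */

	/* headless server or VM: interrupts are the only source */
	printf("%d interrupts -> %.0f bits credited\n",
	       events, events * irq_bits);

	/* desktop: the same interrupts plus the HID credit on top */
	printf("%d HID events -> up to %d additional bits credited\n",
	       events, events * hid_bits_max);
	return 0;
}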
>
> If it is that there are two timestamps that are closely correlated
> being added into the pool, the add_interrupt_randomness() path is
> going to mix that timestamp with the interrupt timings from 63 other
> interrupts before it is mixed into the input pool, while
> add_input_randomness() mixes it directly into the pool. So if you
> think there is a way this could be leveraged into an attack, please
> give specifics --- but I think we're on pretty solid ground here.
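For reference, this is the batching you describe, roughly as I read
add_interrupt_randomness() today (simplified; locking and the arch seed
path omitted):

void add_interrupt_randomness(int irq, int irq_flags)
{
	struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
	unsigned long now = jiffies;

	/* cycle counter, jiffies, instruction pointer and irq number
	 * are XORed into fast_pool->pool[] and stirred */
	fast_mix(fast_pool);

	/* nothing reaches the input pool until ~64 interrupts have
	 * accumulated (or a second has passed) */
	if ((fast_pool->count < 64) &&
	    !time_after(now, fast_pool->last + HZ))
		return;

	__mix_pool_bytes(&input_pool, &fast_pool->pool,
			 sizeof(fast_pool->pool));
	fast_pool->count = 0;

	/* award one bit for the contents of the fast pool */
	credit_entropy_bits(&input_pool, 1);
}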
I am not saying that there is an active attack vector. All I want is to
revalue the entropy credited to one interrupt, which can only be done if we
drop the HID time stamp collection.
Ciao
Stephan