RE: [PATCH] x86/entry/64: randomize kernel stack offset upon syscall

From: Reshetova, Elena
Date: Thu May 02 2019 - 04:16:25 EST


> From: Reshetova, Elena
> > Sent: 30 April 2019 18:51
> ...
> > +unsigned char random_get_byte(void)
> > +{
> > +        struct rnd_buffer *buffer = &get_cpu_var(stack_rand_offset);
> > +        unsigned char res;
> > +
> > +        if (buffer->byte_counter >= RANDOM_BUFFER_SIZE) {
> > +                get_random_bytes(&(buffer->buffer), sizeof(buffer->buffer));
> > +                buffer->byte_counter = 0;
> > +        }
> > +
> > +        res = buffer->buffer[buffer->byte_counter];
> > +        buffer->buffer[buffer->byte_counter] = 0;
>
> Is it really worth dirtying a cache line to zero data we've used?
> The unused bytes following are much more interesting.
>
> Actually if you got 'byte_counter' into a completely different
> area of memory (in data that is changed more often to avoid
> dirtying an extra cache line) then not zeroing the used data
> would make it harder to determine which byte will be used next.

Interesting idea, but what would this area be?
I am not that familiar with different data usage patterns.
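
Just to check that I read the suggestion correctly, here is a rough sketch of
one possible layout (the names, the 64-byte size and the exact split are my
assumptions, not part of the patch): the byte counter becomes its own per-CPU
variable, which could later be placed next to other frequently written data,
and the consumed bytes are left in place rather than zeroed:

#include <linux/percpu.h>
#include <linux/random.h>

#define RANDOM_BUFFER_SIZE 64                   /* assumed size */

struct rnd_bytes {
        unsigned char bytes[RANDOM_BUFFER_SIZE];
};

/*
 * Entropy bytes and the index live in separate per-CPU variables, so
 * bumping the index does not dirty the cache line(s) holding the bytes.
 */
static DEFINE_PER_CPU(struct rnd_bytes, stack_rand_bytes);
static DEFINE_PER_CPU(unsigned int, stack_rand_idx) = RANDOM_BUFFER_SIZE; /* refill on first use */

unsigned char random_get_byte(void)
{
        struct rnd_bytes *b = &get_cpu_var(stack_rand_bytes);
        unsigned int idx = __this_cpu_read(stack_rand_idx);
        unsigned char res;

        if (idx >= RANDOM_BUFFER_SIZE) {
                get_random_bytes(b->bytes, sizeof(b->bytes));
                idx = 0;
        }

        res = b->bytes[idx];    /* consumed byte is left in place, not zeroed */
        __this_cpu_write(stack_rand_idx, idx + 1);
        put_cpu_var(stack_rand_bytes);
        return res;
}

If that is roughly the idea, then advancing the index would not touch the
line(s) holding the entropy bytes, and since nothing is zeroed, the buffer
contents alone would no longer reveal which byte gets used next.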

>
> I'm also guessing that get_cpu_var() disables pre-emption?

Yes, in my understanding:

#define get_cpu_var(var)                                        \
(*({                                                            \
        preempt_disable();                                      \
        this_cpu_ptr(&var);                                     \
}))
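
So the code in the patch relies on the matching put_cpu_var() to re-enable
preemption once the byte has been taken; roughly (using the patch's variable
name):

        struct rnd_buffer *buffer = &get_cpu_var(stack_rand_offset); /* preempt_disable() */

        /* ... use the per-CPU buffer; we cannot migrate to another CPU here ... */

        put_cpu_var(stack_rand_offset);                               /* preempt_enable() */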

> This code could probably run 'fast and loose' and just ignore
> the fact that pre-emption would have odd effects.
> All it would do is perturb the randomness!

Hmm, I see your point, but I am wondering what the odd effects might be.
For example, could we end up using the same random bits for two or more
different syscalls, and could an attacker deliberately trigger that situation?
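
To make the question concrete, here is my guess at what a "fast and loose"
variant could look like (a sketch only; the function name is made up, the
struct layout is taken from the quoted code above and the buffer size is
assumed):

#include <linux/percpu.h>
#include <linux/random.h>

#define RANDOM_BUFFER_SIZE 64                   /* assumed size */

/* Shape of the per-CPU state, as implied by the quoted code above. */
struct rnd_buffer {
        unsigned char buffer[RANDOM_BUFFER_SIZE];
        unsigned int byte_counter;
};
static DEFINE_PER_CPU(struct rnd_buffer, stack_rand_offset) = {
        .byte_counter = RANDOM_BUFFER_SIZE,     /* force a refill on first use */
};

/* Hypothetical variant that never disables preemption. */
unsigned char random_get_byte_loose(void)
{
        /*
         * If the task is preempted after this read and before the write
         * below, two syscalls can end up consuming the same byte.
         */
        unsigned int idx = this_cpu_read(stack_rand_offset.byte_counter);

        if (idx >= RANDOM_BUFFER_SIZE) {
                /* Racy on purpose: after a migration this refills the old CPU's buffer. */
                struct rnd_buffer *buf = raw_cpu_ptr(&stack_rand_offset);

                get_random_bytes(buf->buffer, sizeof(buf->buffer));
                idx = 0;
        }

        this_cpu_write(stack_rand_offset.byte_counter, idx + 1);
        return this_cpu_read(stack_rand_offset.buffer[idx]);
}

The individual this_cpu_*() accessors are preemption-safe on their own, but the
sequence is not, so two syscalls really could consume the same byte; the
question is whether that is just noise or something an attacker can usefully
provoke.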

Best Regards,
Elena.