RE: [PATCH] x86/entry/64: randomize kernel stack offset upon syscall
From: Reshetova, Elena
Date: Wed May 29 2019 - 06:17:16 EST
> I confess I've kind of lost the plot on the performance requirements
> at this point. Instead of measuring and evaluating potential
> solutions, can we try to approach this from the opposite direction and
> ask what the requirements are?
> What's the maximum number of CPU cycles that we are allowed to burn
> such that we can meet the 1-2% overhead?
This is a good question to which I have no answer, so I tried to play with
the performance numbers as much as I could. I don't know what is
considered acceptable overhead for the syscall path, and I guess the answer
can also be presented for two security configurations: weak (fewer CPU cycles)
and strong (more cycles).
> And how many bits of uncertainty are we trying to present to the attacker?
I think currently we are talking about 5 bits; ideally maybe 8 bits.
Given that the offset is new for each syscall, it should be enough to remove
the "stability and predictability" feature of the kernel thread stack that
attackers rely on. If your chances of crashing the kernel are 255 out of 256,
you might look for some other attack path instead.
> What's the minimum beyond which we shouldn't bother? (Perhaps
> because rdtsc will give us that many bits?)
Again, it is all about probabilities. 4 bits gives an attacker a success chance
of 1 in 16 (of not crashing the kernel), which is already starting to be dangerous
in my view, so maybe no fewer than 5?
> And does that change if
> we vary the reseed window in terms of the number of system calls
> between reseeding?
I think if we talk about a PRNG, reseeding should be done periodically to
make sure we never overrun the safe generation period, and also for
additional security (seeding with proper entropy).
But the most important part is to have a distinct offset (random bits) between
consecutive syscalls.
> And what are the ideal parameters after which point we're just gilding
> the lily?
Not sure about ideal parameters for the whole combination here, since security
and performance basically conflict with each other (as usual).
So that's why I was trying to propose to have two versions of this:
- one with tilt towards performance (rdtsc based)
- one with tilt towards security (CRNG-based)
And then let users choose what matters more for their workload.
For normal things like desktops, etc., the CRNG-based version won't introduce
any noticeable overhead. It might only matter for syscall-sensitive workloads,
which, btw, most likely don't enable quite a bunch of other security protections,
so I would say that for them even the rdtsc() version is actually an
improvement in their stack defenses (and basically no impact on performance).
On a related note: the current PRNG we have in the kernel (prandom) is based on a
*very old* style of PRNG, basically four linear LFSRs XORed together.
Nowadays we have much more powerful PRNGs that show much better
statistical and even security properties (not cryptographically secure, but still
not as linear as the one above).
What is the reason we still use a PRNG that is a couple of decades behind the
state of the art in the area? It is actively used, especially by the network stack;
should we update it to something more appropriate (speed would be comparable)?
I am mostly talking about PCG-based generators:
If people are interested, I could put together a PoC, and we have an expert here we
can consult for providing calculations for min-entropy, HILL entropy, and whatever
else is needed.