Re: [PATCH 13/14] x86: BHI stubs
From: Andrew Cooper
Date: Mon Oct 14 2024 - 17:54:17 EST
On 14/10/2024 6:50 pm, Constable, Scott D wrote:
> Hello Andrew,
>
> Your observation is valid. If we assume that the hashing function used by FineIBT is uniformly distributed, then the hashes at the call site and at the call target are uniform over [0, 2^32-1]. The difference of the two hashes computed in R10 has the same distribution because of wrap-around, and its mean is approximately 2^31. Therefore, to reasonably bypass the proposed mitigation, I believe an attacker would need the hardened pointer to be added to/subtracted from an attacker-controlled 64-bit value, or an attacker-controlled 32-bit value scaled by 2, 4, or 8. Hence I think it would be reasonable to additionally apply the CMOV hardening to any 32-/64-bit integral parameters, including enums. I scanned the kernel (Ubuntu noble 6.8 config) and found that 77% of parameters to indirect call targets are pointers (which we already harden) and less than 20% are 32-/64-bit integrals and enums.
>
> I think that this proposal would also address some other potential corner cases, such as:
> - an attacker-controlled 32-/64-bit integral parameter is used to index into a fixed-address array
> - an attacker-controlled 64-bit integral parameter is cast to a pointer
>
> Does this proposal address your concern?
Hello,
Thank you for the analysis, and I'm glad I'm not just clutching at straws.
However, I'm not sure that extending this to the other cases works very
well. While the second case is probably easy for the compiler to figure
out, the first looks rather more like a halting problem.
One key aspect is "how far can speculation continue beyond a
mispredicted Jcc?", but it has occurred to me since the last email that
there is no answer Intel will give here. It is uarch-dependent and
expected to increase on future parts, so safety-wise we must assume it
is infinite.
And infinite is no good, so we must reason about "good enough".
My gut feeling is that blindly using the residual from the hash check
isn't good enough. Seven years of speculation fixes have shown that
researchers constantly prove "this will be good enough" wrong.
So, instead of simply using the residual, why don't we explicitly set
%r10 to a known value? Because we need to preserve the flags from the
original hash check, we can't use any of the simple zeroing idioms (XOR
and friends clobber flags), but we could use MOV $0, %r10 before the
CMOVs targeting the pointer parameters.
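In stub form, that looks something like this (a sketch only: the label,
the choice of %rdi, and the omission of the usual kernel annotations
are all illustrative):

	__bhi_stub_rdi:			/* flags still live from "subl $hash, %r10d" */
		movq	$0, %r10	/* known value; MOV preserves flags */
		cmovne	%r10, %rdi	/* bad speculation: pointer becomes NULL */
		ret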
But, if we're using a long-ish encoding anyway, why not MOV $GB(2)-1, %r10?
This way, in the bad-speculation path we'll set all pointers to just
under 2G, which removes most of the risk from backwards references, and
makes the behaviour invariant of the hash residual (which in turn
reduces the opportunities to leak the hash value).
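Same shape, different constant (again just a sketch; in the MOVQ form
both immediates take the same 7-byte encoding):

		movq	$0x7fffffff, %r10	/* GB(2)-1; MOV preserves flags */
		cmovne	%r10, %rdi		/* bad speculation: pointer lands just below 2G */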
So I suppose the real questions are whether one extra MOV is
acceptable, and whether it is good enough. My gut feeling is yes to
both.
As for the extra cases, they can of course be added if the compiler
support isn't too horrible, independently of the extra MOV. But, if 77%
of parameters to indirect functions are pointers anyway, isn't it worth
considering CMOV-ing all parameter registers, turning the 57 stubs into
just 6 and improving I$ locality?
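For illustration only (names invented, kernel annotations omitted),
six macro-generated stubs along these lines would cover every argument
count:

	.macro	BHI_STUB nargs:req
	__bhi_args_\nargs:
		movq	$0x7fffffff, %r10	/* GB(2)-1; MOV preserves flags */
		.if \nargs >= 1
		cmovne	%r10, %rdi
		.endif
		.if \nargs >= 2
		cmovne	%r10, %rsi
		.endif
		.if \nargs >= 3
		cmovne	%r10, %rdx
		.endif
		.if \nargs >= 4
		cmovne	%r10, %rcx
		.endif
		.if \nargs >= 5
		cmovne	%r10, %r8
		.endif
		.if \nargs >= 6
		cmovne	%r10, %r9
		.endif
		ret
	.endm

	BHI_STUB 1
	BHI_STUB 2
	BHI_STUB 3
	BHI_STUB 4
	BHI_STUB 5
	BHI_STUB 6

Each stub clobbers every argument register on the mispredicted path, so
we no longer need a stub per pointer-argument combination.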
~Andrew