Re: [RFC] Disable lockref on arm64

From: Kees Cook
Date: Sat Jun 15 2019 - 00:26:58 EST


tl;dr: if arm/arm64 can catch overflow, untested dec-to-zero, and
inc-from-zero, while performing better than existing REFCOUNT_FULL,
it's a no-brainer to switch. Minimum parity to x86 would be to catch
overflow and untested dec-to-zero. Minimum viable protection would be to
catch overflow. LKDTM is your friend.

Details below...

On Fri, Jun 14, 2019 at 11:38:50AM +0100, Will Deacon wrote:
> On Fri, Jun 14, 2019 at 12:24:54PM +0200, Ard Biesheuvel wrote:
> > On Fri, 14 Jun 2019 at 11:58, Will Deacon <will.deacon@xxxxxxx> wrote:
> > > On Fri, Jun 14, 2019 at 07:09:26AM +0000, Jayachandran Chandrasekharan Nair wrote:
> > > > x86 added an arch-specific fast refcount implementation - and the commit
> > > > specifically notes that it is faster than cmpxchg-based code [1].
> > > >
> > > > There seems to be an ongoing effort to move more and more subsystems
> > > > from atomic_t to refcount_t (e.g. [2]), specifically because refcount_t on
> > > > x86 is fast enough and you get some error checking that atomic_t does
> > > > not have.

For clarity: the choices on x86 are full or fast, where both catch the
condition that leads to a use-after-free and that can be unconditionally
mitigated (i.e. a refcount overflow-wrapping to zero: the common missing
ref count decrement). The _underflow_ case (the less common missing ref
count increment) can be exploited, but nothing can be done to mitigate
it. Only a later increment from zero can indicate that something went
wrong _in the past_.
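
To make those two failure modes concrete, here's a hypothetical sketch
(not taken from any real driver; the object and function names are made
up) of the buggy patterns, written against the refcount_t API:

#include <linux/refcount.h>
#include <linux/slab.h>

struct obj {
        refcount_t ref;
};

/* Missing decrement: repeated gets with no matching put eventually walk
 * the count up to INT_MAX; a plain atomic_t would wrap through zero and
 * set up a use-after-free, which is the case saturation mitigates. */
static void leaky_get(struct obj *o)
{
        refcount_inc(&o->ref);
        /* BUG: no matching refcount_dec_and_test() on any path */
}

/* Missing increment (or doubled put): the count hits zero while other
 * users still hold pointers; a later refcount_inc() on the freed object
 * is the "inc-from-zero" that can only be flagged after the fact. */
static void premature_put(struct obj *o)
{
        if (refcount_dec_and_test(&o->ref))
                kfree(o);       /* another path may still be using o */
}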

There is no way to build x86 without the overflow protection, and
that was matched on arm/arm64 by making REFCOUNT_FULL unconditionally
enabled. So, from the perspective of my take on weakening the protection
level, I'm totally fine if arm/arm64 falls back to a non-FULL
implementation as long as it catches the overflow case (which the prior
"fast" patches totally did).

> > > Correct, but there are also some cases that are only caught by
> > > REFCOUNT_FULL.
> > >
> > Yes, but do note that my arm64 implementation catches
> > increment-from-zero as well.

FWIW, the vast majority of bugs that refcount_t has found have been
inc-from-zero (the overflow case doesn't tend to ever get exercised,
but it's easy for syzkaller and other fuzzers to underflow when such a
path is found). And those are presently only found on REFCOUNT_FULL
kernels, so it'd be nice to have that case covered in the "fast"
arm/arm64 case too.
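
For reference, this is roughly why a cmpxchg-based check can catch
inc-from-zero: the old value gets examined before the new one is
committed. (A simplified sketch with a made-up function name, not the
exact kernel code.)

#include <linux/atomic.h>
#include <linux/bug.h>
#include <linux/refcount.h>

static inline void refcount_inc_full_sketch(refcount_t *r)
{
        int old = atomic_read(&r->refs);

        do {
                if (old == 0) {
                        /* refuse to resurrect a freed object */
                        WARN_ONCE(1, "refcount_t: increment on 0; use-after-free.\n");
                        return;
                }
                /* on failure, old is reloaded and re-checked above */
        } while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + 1));
}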

> Ok, so it's just the silly racy cases that are problematic?
>
> > > > Do you think Ard's patch needs changes before it can be considered? I
> > > > can take a look at that.
> > >
> > > I would like to see how it performs if we keep the checking inline, yes.
> > > I suspect Ard could spin this in short order.
> >
> > Moving the post checks before the stores you mean? That shouldn't be
> > too difficult, I suppose, but it will certainly cost performance.
>
> That's what I'd like to assess, since the major complaint seems to be the
> use of cmpxchg() as opposed to inline branching.
>
> > > > > Whatever we do, I prefer to keep REFCOUNT_FULL the default option for arm64,
> > > > > so if we can't keep the semantics when we remove the cmpxchg, you'll need to
> > > > > opt into this at config time.
> > > >
> > > > Only arm64 and arm select REFCOUNT_FULL in the default config. So please
> > > > reconsider this! This is going to slow down arm64 vs. other archs and it
> > > > will become worse when more code adopts refcount_t.
> > >
> > > Maybe, but faced with the choice between your micro-benchmark results and
> > > security-by-default for people using the arm64 Linux kernel, I really think
> > > that's a no-brainer. I'm well aware that not everybody agrees with me on
> > > that.
> >
> > I think the question whether the benchmark is valid is justified, but
> > otoh, we are obsessed with hackbench which is not that representative
> > of a real workload either. It would be better to discuss these changes
> > in the context of known real-world use cases where refcounts are a
> > true bottleneck.
>
> I wasn't calling into question the validity of the benchmark (I really have
> no clue about that), but rather that you can't have your cake and eat it.
> Faced with the choice, I'd err on the security side because it's far easier
> to explain to somebody that the default is full mitigation at a cost than it
> is to explain why a partial mitigation is acceptable (and in the end it's
> often subjective because people have different thresholds).

I'm happy to call into question the validity of the benchmark though! ;)
Seriously, it came up repeatedly in the x86 port, where there was a
claim of "it's slower" (which is certainly objectively true: more cycles
are spent), but no one could present a real-world workload where the
difference was measurable.

> > Also, I'd like to have Kees's view on the gap between REFCOUNT_FULL
> > and the fast version on arm64. I'm not convinced the cases we are not
> > covering are such a big deal.
>
> Fair enough, but if the conclusion is that it's not a big deal then we
> should just remove REFCOUNT_FULL altogether, because it's the choice that
> is the problem here.

The coverage difference on x86 is that inc-from-zero is only caught in
the FULL case. Additionally, there is an internal difference in how
"saturation" of the value happens: e.g. under FULL a count gets pinned
either to INT_MAX or to zero.
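
To spell out that saturation difference, here's a hedged C approximation
of the x86 fast path (the function name is made up and the real code is
asm with a "js" branch to an exception handler, so treat this as
illustrative only):

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/refcount.h>

static inline void refcount_inc_fast_sketch(refcount_t *r)
{
        int new = atomic_inc_return(&r->refs);

        if (unlikely(new < 0)) {
                /* wrapped past INT_MAX: pin to a mid-negative value so
                 * both overflow and underflow land in a sticky
                 * "saturated" band, instead of the INT_MAX/0 pinning
                 * that FULL does */
                atomic_set(&r->refs, INT_MIN / 2);
                WARN_ONCE(1, "refcount_t: saturated, leaking memory\n");
        }
}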

Since the "fast" arm patch caught inc-from-zero, I would say sure
ditch FULL in favor of it (though check that "dec-to-zero" is caught:
i.e. _dec() hitting zero -- instead of dec_and_test() hitting zero). LKDTM
has extensive behavioral tests for refcount_t, so if the tests show the
same results before/after, go for it. :) Though note that the logic may
need tweaking depending on the saturation behavior: right now it expects
either FULL (INT_MAX/0 pinning) or the x86 saturation (INT_MIN / 2).
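
In case the dec-to-zero wording above is ambiguous, this is the
distinction I mean (hypothetical object, illustrative only):

#include <linux/refcount.h>
#include <linux/slab.h>

struct obj {
        refcount_t ref;
};

/* Normal last put: the caller tests for zero and frees the object. */
static void obj_put(struct obj *o)
{
        if (refcount_dec_and_test(&o->ref))
                kfree(o);
}

/* "Untested" decrement: the caller is asserting it is NOT the last
 * reference. If this ever reaches zero, a put was lost or doubled
 * somewhere else, which is exactly what a checking implementation
 * should warn about. */
static void obj_put_not_last(struct obj *o)
{
        refcount_dec(&o->ref);
}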

Note also that LKDTM has a refcount benchmark as well, in case you want
to measure the difference between atomic_t and refcount_t in the most
microbenchmark-y way possible. This is what was used for the numbers in
commit 7a46ec0e2f48 ("locking/refcounts, x86/asm: Implement fast
refcount overflow protection"):

2147483646 refcount_inc()s and 2147483647 refcount_dec_and_test()s:

                        cycles          protections
  atomic_t              82249267387     none
  refcount_t-fast       82211446892     overflow, untested dec-to-zero
  refcount_t-full       144814735193    overflow, untested dec-to-zero, inc-from-zero
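
For reference, the rough shape of what that benchmark times is just this
(a sketch with a made-up function name, not the actual LKDTM code):
hammer refcount_inc() up to just below the limit, then
refcount_dec_and_test() all the way back down.

#include <linux/kernel.h>
#include <linux/refcount.h>

static void refcount_timing_sketch(void)
{
        refcount_t r = REFCOUNT_INIT(1);
        unsigned int i;

        /* 2147483646 increments: 1 -> INT_MAX */
        for (i = 0; i < INT_MAX - 1; i++)
                refcount_inc(&r);

        /* 2147483647 decrements: INT_MAX -> 0 */
        for (i = 0; i < INT_MAX; i++)
                if (refcount_dec_and_test(&r))
                        break;
}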

Also note that the x86 fast implementations adjusted memory ordering
slightly later on in commit 47b8f3ab9c49 ("refcount_t: Add ACQUIRE
ordering on success for dec(sub)_and_test() variants").
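
For the curious, the shape of that change is roughly this (hedged
sketch with a made-up function name; see the commit for the real diff):
the decrement stays RELEASE, and an ACQUIRE barrier is added on the
success path so the caller's teardown can't be reordered before the
final put.

#include <linux/atomic.h>
#include <linux/refcount.h>

static inline bool refcount_sub_and_test_sketch(int i, refcount_t *r)
{
        int old = atomic_fetch_sub_release(i, &r->refs);

        if (old == i) {
                /* success: pair the RELEASE above with an ACQUIRE here */
                smp_acquire__after_ctrl_dep();
                return true;
        }
        return false;
}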

--
Kees Cook