Re: [RFC] Disable lockref on arm64

From: Jayachandran Chandrasekharan Nair
Date: Fri Jun 14 2019 - 03:14:26 EST


On Wed, Jun 12, 2019 at 10:31:53AM +0100, Will Deacon wrote:
> Hi JC,
>
> On Wed, Jun 12, 2019 at 04:10:20AM +0000, Jayachandran Chandrasekharan Nair wrote:
> > On Wed, May 22, 2019 at 05:04:17PM +0100, Will Deacon wrote:
> > > On Sat, May 18, 2019 at 12:00:34PM +0200, Ard Biesheuvel wrote:
> > > > On Sat, 18 May 2019 at 06:25, Jayachandran Chandrasekharan Nair
> > > > <jnair@xxxxxxxxxxx> wrote:
> > > > > Looking thru the perf output of this case (open/close of a file from
> > > > > multiple CPUs), I see that refcount is a significant factor in most
> > > > > kernel configurations - and that too uses cmpxchg (without yield).
> > > > > x86 has an optimized inline version of refcount that helps
> > > > > significantly. Do you think this is worth looking at for arm64?
> > > > >
> > > >
> > > > I looked into this a while ago [0], but at the time, we decided to
> > > > stick with the generic implementation until we encountered a use case
> > > > that benefits from it. Worth a try, I suppose ...
> > > >
> > > > [0] https://lore.kernel.org/linux-arm-kernel/20170903101622.12093-1-ard.biesheuvel@xxxxxxxxxx/
> > >
> > > If JC can show that we benefit from this, it would be interesting to see if
> > > we can implement the refcount-full saturating arithmetic using the
> > > LDMIN/LDMAX instructions instead of the current cmpxchg() loops.
> >
> > Now that the lockref change is mainline, I think we need to take another
> > look at this patch.
>
> Before we get too involved with this, I really don't want to start a trend of
> "let's try to rewrite all code using cmpxchg() in Linux because of TX2".

x86 added an arch-specific fast refcount implementation - and the commit
message specifically notes that it is faster than the cmpxchg-based code[1].

There is an ongoing effort to move more and more subsystems from
atomic_t to refcount_t (e.g. [2]), specifically because refcount_t on
x86 is fast enough and you get error checking that atomic_t does not
provide.
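
To make the comparison concrete, the x86 scheme boils down to a single
unconditional atomic op with the overflow check folded into the sign of
the result. Roughly, in C (a sketch of the idea only - the real x86
version does this in asm, and refcount_overflow() here is a hypothetical
stand-in for its out-of-line exception handler):

static inline void refcount_inc_fast(refcount_t *r)
{
	/*
	 * One unconditional atomic op, no cmpxchg() retry loop.
	 * Counter values are treated as signed, so an increment
	 * past INT_MAX shows up as a negative result.
	 */
	if (unlikely(atomic_inc_return(&r->refs) < 0))
		refcount_overflow(r);	/* hypothetical out-of-line handler */
}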

> At some point, the hardware needs to play ball. However...

Even on a totally baller CPU, REFCOUNT_FULL is going to be slow :)
On TX2, this specific benchmark just highlights the issue, but the
difference is significant even on x86 (as noted above).

> Ard's refcount patch was about moving the overflow check out-of-line. A
> side-effect of this, is that we avoid the cmpxchg() operation from many of
> the operations (atomic_add_unless() disappears), and it's /this/ which helps
> you. So there may well be a middle ground where we avoid the complexity of
> the out-of-line {over,under}flow handling but do the saturation post-atomic
> inline.

Right.
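
Something along these lines is what I had in mind - just a sketch, with
an assumed saturation constant, not a worked-out implementation:

static inline void refcount_inc_inline_sat(refcount_t *r)
{
	/*
	 * Unconditional atomic add; the old value tells us whether
	 * we overflowed into the negative range.
	 */
	int old = atomic_fetch_add_relaxed(1, &r->refs);

	/*
	 * Saturate inline instead of branching out of line. Pinning
	 * the counter at a large negative value means it can never
	 * count back down to zero and free the object.
	 */
	if (unlikely(old < 0))
		atomic_set(&r->refs, INT_MIN / 2);	/* assumed saturation point */
}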

> I was hoping we could use LDMIN/LDMAX to maintain the semantics of
> REFCOUNT_FULL, but now that I think about it I can't see how we could keep
> the arithmetic atomic in that case. Hmm.

Do you think Ard's patch needs changes before it can be considered? I
can take a look at that.
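
On the LDMIN/LDMAX point, the way I read it: any saturating sequence
ends up as two separate atomics, so the wrapped value is briefly
visible to other CPUs. Roughly (hypothetical sketch, atomic_saturate()
is not a real interface):

static inline void refcount_inc_sketch(refcount_t *r)
{
	/* Step 1: unconditional atomic increment. */
	if (likely(atomic_fetch_add_relaxed(1, &r->refs) >= 0))
		return;

	/*
	 * Step 2: clamp, e.g. via an LDMIN/LDMAX-style instruction.
	 * The two steps are individually atomic but not atomic as a
	 * pair: between them, other CPUs can observe (and act on) the
	 * wrapped value. REFCOUNT_FULL never lets the bad value become
	 * visible in the first place - hence the semantic difference.
	 */
	atomic_saturate(&r->refs);	/* hypothetical helper */
}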

> Whatever we do, I prefer to keep REFCOUNT_FULL the default option for arm64,
> so if we can't keep the semantics when we remove the cmpxchg, you'll need to
> opt into this at config time.

Only arm64 and arm select REFCOUNT_FULL in the default config. So please
reconsider this! It is going to slow down arm64 relative to other
architectures, and it will get worse as more code adopts refcount_t.

JC
[1] https://www.mail-archive.com/linux-kernel@xxxxxxxxxxxxxxx/msg1451350.html
[2] https://www.mail-archive.com/linux-kernel@xxxxxxxxxxxxxxx/msg1336955.html