Re: [RFC][PATCH 1/5] mm: Rework {set,clear,mm}_tlb_flush_pending()
From: Will Deacon
Date: Wed Aug 02 2017 - 04:43:57 EST
On Wed, Aug 02, 2017 at 09:15:23AM +0100, Will Deacon wrote:
> On Wed, Aug 02, 2017 at 10:11:06AM +0200, Peter Zijlstra wrote:
> > On Wed, Aug 02, 2017 at 11:23:12AM +1000, Benjamin Herrenschmidt wrote:
> > > On Wed, 2017-08-02 at 00:59 +0200, Peter Zijlstra wrote:
> > > > > PowerPC for example uses PTESYNC before the TLBIE, so does a SYNC
> > > > > afterwards work? Ben?
> > > >
> > > > From what I gather it is not. You have TLBSYNC for it. So the good news
> > >
> > > tlbsync is pretty much a nop these days. ptesync is a strict superset
> > > of sync and we have it after every tlbie.
> >
> > In the radix code, yes. I got lost going through the hash code, and I
> > didn't look at the 32bit code at all.
> >
> > So the radix code does:
> >
> > PTESYNC
> > TLBIE
> > EIEIO; TLBSYNC; PTESYNC
> >
> > which should be completely ordered against anything prior and anything
> > following, and is, I think, the behaviour we want from TLB flushes in
> > general, but it is very much not provided by a number of architectures
> > afaict.
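
FWIW, that sequence corresponds to something like the following inline asm
(a simplified sketch of what's in arch/powerpc/mm/tlb-radix.c; the function
name here is made up and the real tlbie takes more operands):

	static inline void radix_flush_one(unsigned long va, unsigned long pid)
	{
		asm volatile("ptesync" : : : "memory");	/* order prior PTE stores */
		asm volatile("tlbie %0, %1"		/* broadcast invalidate */
			     : : "r" (va), "r" (pid) : "memory");
		asm volatile("eieio; tlbsync; ptesync"	/* wait for completion */
			     : : : "memory");
	}
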
> >
> > Ah, found the hash-64 code; yes, that's good too. The hash32 code lives
> > in asm and confuses me: it has a bunch of SYNC, SYNC_601 and isync
> > instructions in it. The nohash variant seems to do an isync after tlbwe,
> > but again, no clue.
> >
> >
> > Now, do I go and attempt fixing all that needs fixing?
> >
> >
> > x86 is good: our CR3 writes and INVLPG are fully serializing.
> >
> > arm is good: it does DSB ISH before and after.
> >
> > arm64 looks good too, although it plays silly games with the first
> > barrier, but I trust that to be sufficient.
>
> The first barrier only orders prior stores for us, because page table
> updates are made using stores. A prior load could be reordered past the
> invalidation, but can't make it past the second barrier.
>
> I really think we should avoid defining TLB invalidation in terms of
> smp_mb() because it's a lot more subtle than that.
Another worry I have here is with architectures that can optimise the
"only need to flush the local TLB" case. For example, take this variant of
the 'R' litmus test:
P0:
	WRITE_ONCE(x, 1);
	smp_mb();
	WRITE_ONCE(y, 1);

P1:
	WRITE_ONCE(y, 2);
	flush_tlb_range(...);	// Only needs to flush the local TLB
	r0 = READ_ONCE(x);
It doesn't seem unreasonable to me to end up with y==2 && r0==0 if the
flush_tlb_range(...) ends up only doing local invalidation. As a concrete
example, imagine a CPU with a page-table walker that can snoop the local
store-buffer. Then the local flush_tlb_range in P1 only needs to progress
the write to y as far as the store-buffer before it can invalidate the local
TLB. Once the TLB is invalidated, it can read x knowing that the translation
is up-to-date wrt the page table, but that read doesn't need to wait for the
write to y to become visible to other CPUs.
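
For example, one interleaving that ends with y == 2 && r0 == 0:

	P1: y=2 enters P1's store-buffer (not yet visible to P0)
	P1: the walker snoops the store-buffer; the local invalidation completes
	P1: r0 = READ_ONCE(x) returns 0 (P0's x=1 hasn't propagated yet)
	P0: x=1 and then y=1 become visible to all CPUs
	P1: the store-buffer drains; y=2 hits the coherence point after y=1
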
So flush_tlb_range() is actually weaker than smp_mb() in some respects, yet
the tlb_flush_pending stuff will still work correctly.
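
That is, the usage pattern is roughly the following (illustrative, not the
exact patch):

	/* flusher: */
	set_tlb_flush_pending(mm);		/* before any PTE changes */
	/* ... clear/modify PTEs under the PTL ... */
	flush_tlb_range(vma, start, end);
	clear_tlb_flush_pending(mm);		/* only once the flush is done */

	/* racing lockless user, e.g. a NUMA hinting fault: */
	if (mm_tlb_flush_pending(mm))
		flush_tlb_range(vma, start, end); /* can't trust the TLB yet */

and the correctness argument only needs the flush to be ordered against the
PTE stores (the translation being up-to-date wrt the page table), not to
provide full smp_mb() ordering for arbitrary data.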
Will