Re: [PATCH RFC tip/core/rcu 1/2] srcu: Allow use of Tiny/Tree SRCU from both process and interrupt context
From: Peter Zijlstra
Date: Tue Jun 06 2017 - 12:12:46 EST
On Tue, Jun 06, 2017 at 04:45:57PM +0200, Christian Borntraeger wrote:
> At the same time, the implicit memory barrier of the atomic_inc should be
> even cheaper. In contrast to x86, a full smp_mb seems to be almost for
> free (looks like <= 1 cycle for a bcr 14,0 and no contention). So I
> _think_ that this should be really fast enough.
So there is a patch out there that changes the x86 smp_mb()
implementation to do "LOCK ADD some_stack_location, 0", which is a lot
cheaper than the "MFENCE" instruction and provides similar ordering
guarantees.
HPA was running that through some of the architects... ping?
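For concreteness, a minimal sketch of the two full-barrier flavours
being compared, in the usual x86-64 inline-asm form; the macro names
and the exact stack offset are my own illustration, not the actual
patch text:

/* Traditional full barrier: a serializing MFENCE. */
#define my_smp_mb_mfence()	asm volatile("mfence" ::: "memory")

/*
 * Any LOCK-prefixed RMW is a full memory barrier on x86.  Adding 0 to
 * a dead stack slot (below %rsp, unused because the kernel builds with
 * -mno-red-zone) gives the same ordering for normal memory accesses at
 * a lower execution cost than MFENCE.
 */
#define my_smp_mb_lock_add() \
	asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc")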
(Also, I can imagine OoO CPUs collapsing back-to-back ordering
operations, but what do I know.)
> As a side note, I am asking myself, though, why we need the
> preempt_disable/enable for the cases where we use the opcodes
> like lao (atomic load and or to a memory location) and friends.
I suspect the real reason is CPU hotplug, because regular preemption
should not matter. It would be the same as getting migrated the moment
_after_ you do the $op.
But preempt_disable() also holds off hotplug and thereby serializes
against hotplug notifiers that want to, for instance, move the value of
the per-cpu variable to a still-online CPU. Without that serialization,
the $op could happen _after_ the hotplug notifier runs, at which point
its update would land on a per-cpu slot whose value has already been
moved away and so be lost.
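To illustrate that pattern (hypothetical names, just a sketch of the
situation described above, not code from the patch): a per-cpu counter
updated with an atomic RMW, plus a hotplug teardown callback that folds
a dead CPU's count into a CPU that is still online.

#include <linux/atomic.h>
#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(atomic_t, my_count);	/* hypothetical counter */

static void my_count_inc(void)
{
	/*
	 * The atomic op itself does not care about being preempted or
	 * migrated right after it runs.  Per the reasoning above, what
	 * preempt_disable() buys is that this CPU cannot be taken down
	 * while we are in this section, so the increment cannot land
	 * on a slot that my_count_dead() has already drained.
	 */
	preempt_disable();
	atomic_inc(raw_cpu_ptr(&my_count));
	preempt_enable();
}

/* Teardown callback: move the dead CPU's count to a still-online CPU. */
static int my_count_dead(unsigned int cpu)
{
	int val = atomic_xchg(per_cpu_ptr(&my_count, cpu), 0);

	preempt_disable();
	atomic_add(val, raw_cpu_ptr(&my_count));
	preempt_enable();
	return 0;
}

In a real driver my_count_dead() would be registered as a CPUHP
teardown callback (e.g. via cpuhp_setup_state_nocalls()); here it only
shows what the $op would race against without the preempt_disable().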