Re: [RFC PATCH] percpu system call: fast userspace percpu critical sections

From: Andy Lutomirski
Date: Tue May 26 2015 - 15:57:33 EST


On May 25, 2015 11:54 AM, "Andy Lutomirski" <luto@xxxxxxxxxxxxxx> wrote:
>
> [cc: hpa, Borislav and Andi]
>
> On Mon, May 25, 2015 at 11:30 AM, Mathieu Desnoyers
> <mathieu.desnoyers@xxxxxxxxxxxx> wrote:
> > ----- Original Message -----
> >> On May 23, 2015 10:09 AM, "Mathieu Desnoyers"
> >> <mathieu.desnoyers@xxxxxxxxxxxx> wrote:
> >> >
> >> > ----- Original Message -----
> >> > > On Fri, May 22, 2015 at 2:34 PM, Mathieu Desnoyers
> >> > > <mathieu.desnoyers@xxxxxxxxxxxx> wrote:
> >> > > > ----- Original Message -----
> >> > > >> On Fri, May 22, 2015 at 1:26 PM, Michael Kerrisk
> >> > > >> <mtk.manpages@xxxxxxxxx>
> >> > > >> wrote:
> >> > > >> > [CC += linux-api@]
> >> > > >> >
> >> > > >> > On Thu, May 21, 2015 at 4:44 PM, Mathieu Desnoyers
> >> > > >> > <mathieu.desnoyers@xxxxxxxxxxxx> wrote:
> >> > > >> >> Expose a new system call allowing userspace threads to register
> >> > > >> >> a TLS area used as an ABI between the kernel and userspace to
> >> > > >> >> share information required to create efficient per-cpu critical
> >> > > >> >> sections in user-space.
> >> > > >> >>
> >> > > >> >> This ABI consists of a thread-local structure containing:
> >> > > >> >>
> >> > > >> >> - a nesting count surrounding the critical section,
> >> > > >> >> - a signal number to be sent to the thread when it is preempted
> >> > > >> >> with a non-zero nesting count,
> >> > > >> >> - a flag indicating whether the signal has been sent within the
> >> > > >> >> critical section,
> >> > > >> >> - an integer in which to store the current CPU number, updated
> >> > > >> >> whenever the thread is preempted. This CPU number cache is not
> >> > > >> >> strictly needed, but performs better than the getcpu vdso call.
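
A minimal sketch of what such a registered TLS area might look like; the
field names and types below are illustrative guesses, not the actual
layout from the patch:

#include <stdint.h>

/* Illustrative only: one such structure per thread, registered with the
 * kernel through the proposed system call. */
struct percpu_user_tls {
        int32_t nesting;     /* > 0 while inside a per-cpu critical section */
        int32_t signo;       /* signal to deliver if preempted while nesting > 0 */
        int32_t signal_sent; /* set by the kernel once that signal was queued */
        int32_t current_cpu; /* CPU number cache, refreshed on preemption */
};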
> >> > > >> >>
> >> > > >> >> This approach is inspired by Paul Turner and Andrew Hunter's work
> >> > > >> >> on percpu atomics, which lets the kernel handle restart of critical
> >> > > >> >> sections, ref.
> >> > > >> >> http://www.linuxplumbersconf.org/2013/ocw/system/presentations/1695/original/LPC%20-%20PerCpu%20Atomics.pdf
> >> > > >> >>
> >> > > >> >> What is done differently here compared to percpu atomics: we track
> >> > > >> >> a single nesting counter per thread rather than many ranges of
> >> > > >> >> instruction pointer values. We deliver a signal to user-space and
> >> > > >> >> let the logic of restart be handled in user-space, thus moving
> >> > > >> >> the complexity out of the kernel. The nesting counter approach
> >> > > >> >> allows us to skip the complexity of interacting with signals that
> >> > > >> >> would be otherwise needed with the percpu atomics approach, which
> >> > > >> >> needs to know which instruction pointers are preempted, including
> >> > > >> >> when preemption occurs on a signal handler nested over an
> >> > > >> >> instruction
> >> > > >> >> pointer of interest.
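
Very roughly, the user-space side of that nesting-counter scheme could
look like the sketch below. This only illustrates the idea described
above; it reuses the illustrative struct percpu_user_tls from earlier,
and the restart handling is a simplified assumption, not the
proof-of-concept code:

/* Enter/exit for a per-cpu critical section built on a per-thread
 * nesting counter registered with the kernel. */
static __thread struct percpu_user_tls tls_area;

static inline void percpu_cs_enter(void)
{
        tls_area.nesting++;
        asm volatile ("" ::: "memory"); /* keep the section inside the count */
}

/* Returns non-zero if the kernel preempted us inside the section and the
 * caller should run its user-space restart path. */
static inline int percpu_cs_exit(void)
{
        asm volatile ("" ::: "memory");
        tls_area.nesting--;
        if (tls_area.signal_sent) {
                tls_area.signal_sent = 0;
                return 1;
        }
        return 0;
}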
> >> > > >> >>
> >> > > >>
> >> > > >> I talked about this kind of thing with PeterZ at LSF/MM, and I was
> >> > > >> unable to convince myself that the kernel needs to help at all. To do
> >> > > >> this without kernel help, I want to relax the requirements slightly.
> >> > > >> With true per-cpu atomic sections, you have a guarantee that you are
> >> > > >> either really running on the same CPU for the entire duration of the
> >> > > >> atomic section or you abort. I propose a weaker primitive: you
> >> > > >> acquire one of an array of locks (probably one per cpu), and you are
> >> > > >> guaranteed that, if you don't abort, no one else acquires the same
> >> > > >> lock while you hold it.
> >> > > >
> >> > > > In my proof of concept (https://github.com/compudj/percpu-dev) I
> >> > > > actually implement an array of per-cpu locks. The issue here boils
> >> > > > down to grabbing this per-cpu lock efficiently. Once the lock is taken,
> >> > > > the thread has exclusive access to that per-cpu lock, even if it
> >> > > > migrates.
> >> > > >
> >> > > >> Here's how:
> >> > > >>
> >> > > >> Create an array of user-managed locks, one per cpu. Call them lock[i]
> >> > > >> for 0 <= i < ncpus.
> >> > > >>
> >> > > >> To acquire, look up your CPU number. Then, atomically, check that
> >> > > >> lock[cpu] isn't held and, if so, mark it held and record both your tid
> >> > > >> and your lock acquisition count. If you learn that the lock *was*
> >> > > >> held after all, signal the holder (with kill or your favorite other
> >> > > >> mechanism), telling it which lock acquisition count is being aborted.
> >> > > >> Then atomically steal the lock, but only if the lock acquisition count
> >> > > >> hasn't changed.
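
A sketch of that acquire/steal path with C11 atomics; packing the tid and
acquisition count into one word, and using kill() here, are
simplifications for illustration (a real implementation would use
tgkill() to target the holder thread, plus some side channel to tell it
which acquisition count is being aborted):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <signal.h>

struct percpu_lock {
        /* 0 when free; otherwise holder tid in the low 32 bits and the
         * holder's acquisition count in the high 32 bits. */
        _Atomic uint64_t word;
};

static bool percpu_lock_acquire(struct percpu_lock *l, uint32_t tid,
                                uint32_t acq_count, int abort_sig)
{
        uint64_t expected = 0;
        uint64_t mine = ((uint64_t)acq_count << 32) | tid;

        /* Fast path: lock[cpu] is not held, mark it held with our tid
         * and acquisition count. */
        if (atomic_compare_exchange_strong(&l->word, &expected, mine))
                return true;

        /* It was held after all: 'expected' now holds the holder's word.
         * Signal the holder that its acquisition is being aborted... */
        kill((pid_t)(uint32_t)expected, abort_sig);

        /* ...then steal the lock, but only if the holder's tid and
         * acquisition count have not changed in the meantime. */
        return atomic_compare_exchange_strong(&l->word, &expected, mine);
}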
> >> > > >>
> >> > > >> This has a few benefits over the in-kernel approach:
> >> > > >>
> >> > > >> 1. No kernel patch.
> >> > > >>
> >> > > >> 2. No unnecessary abort if you are preempted in favor of a thread that
> >> > > >> doesn't contend for your lock.
> >> > > >>
> >> > > >> 3. Greatly improved debuggability.
> >> > > >>
> >> > > >> 4. With long critical sections and heavy load, you can improve
> >> > > >> performance by having several locks per cpu and choosing one at
> >> > > >> random.
> >> > > >>
> >> > > >> Is there a reason that a scheme like this doesn't work?
> >> > > >
> >> > > > What do you mean exactly by "atomically check that lock is not
> >> > > > held and, if so, mark it held"? Do you imply using a lock-prefixed
> >> > > > atomic operation?
> >> > >
> >> > > Yes.
> >> > >
> >> > > >
> >> > > > The goal of this whole restart section approach is to allow grabbing
> >> > > > a lock (or doing other sequences of operations ending with a single
> >> > > > store) on per-cpu data without having to use slow lock-prefixed
> >> > > > atomic operations.
> >> > >
> >> > > Ah, ok, I assumed it was to allow multiple threads to work in parallel.
> >> > >
> >> > > How arch-specific are you willing to be?
> >> >
> >> > I'd want this to be usable on every major architectures.
> >> >
> >> > > On x86, it might be possible
> >> > > to play some GDT games so that an unlocked xchg relative
> >> >
> >> > AFAIK, there is no such thing as an unlocked xchg. xchg always
> >> > implies the lock prefix on x86. I guess you mean cmpxchg here.
> >> >
> >>
> >> Right, got my special cases mixed up.
> >>
> >> I wonder if we could instead have a vdso function that did something like:
> >>
> >> unsigned long __vdso_cpu_local_exchange(unsigned long *base, int shift,
> >>                                         unsigned long newval)
> >> {
> >>         /* cpu: whatever CPU the kernel considers current here */
> >>         unsigned long *ptr = base + (cpu << shift);
> >>         unsigned long old = *ptr;
> >>
> >>         *ptr = newval;
> >>         return old;
> >> }
> >>
> >> I think this primitive would be sufficient to let user code do the
> >> rest. There might be other, simpler primitives that would work.
> >> It could be implemented by fiddling with IP ranges, but we could
> >> change the implementation later without breaking anything. The only
> >> really hard part would be efficiently figuring out what CPU we're on.
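
For instance, user code could detach an entire per-cpu free list with a
single call to such a primitive. This is a hypothetical usage sketch,
assuming the vdso function above existed with that signature:

#include <stddef.h>

struct node {
        struct node *next;
        /* payload ... */
};

/* Hypothetical vdso entry point, as sketched above. */
extern unsigned long __vdso_cpu_local_exchange(unsigned long *base, int shift,
                                               unsigned long newval);

#define CPU_SHIFT 3     /* illustrative spacing between per-cpu slots */
static unsigned long list_heads[256 << CPU_SHIFT];      /* head word per cpu */

static struct node *pop_all_local(void)
{
        /* Whichever CPU the kernel considers current when the exchange
         * runs, atomically take that CPU's whole list and leave it empty. */
        return (struct node *)__vdso_cpu_local_exchange(list_heads, CPU_SHIFT, 0);
}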
> >
> > The "fiddling with IP ranges" is where the restart sections come into
> > play. Paul Turner's approach indeed knows about IP ranges, and performs
> > the restart from the kernel. My alternative approach uses a signal and
> > page protection in user-space to reach the same result. It appears that
> > CONFIG_PREEMPT kernels are difficult to handle with Paul's approach, so
> > perhaps we could combine our approaches to get the best of both.
>
> I'm not sure why CONFIG_PREEMPT would matter. What am I missing?
>
> Doing this in the vdso has some sneaky benefits: rather than aborting
> a very short vdso-based primitive on context switch, we could just fix
> it up in the kernel and skip ahead to the end.

I might be guilty of being too x86-centric here. On x86, as long as
the lock and unlock primitives are sufficiently atomic, everything
should be okay. On other architectures, though, a primitive that
gives lock, unlock, and abort of a per-cpu lock without checking that
you're still on the right cpu at unlock time may not be sufficient.
If the primitive is implemented purely with loads and stores, then
even if you take the lock, migrate, finish your work, and unlock
without anyone else contending for the lock (and hence without aborting),
the next thread to take the same lock will end up unsynchronized
unless there's appropriate memory ordering. For example, if taking
the lock were an acquire and unlocking were a release, we'd be fine.
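Something like this is the kind of pairing I mean (C11 atomics, just to
show the acquire/release ordering; the abort and steal paths are
omitted):

#include <stdatomic.h>
#include <stdbool.h>

#define NR_CPUS 64      /* illustrative */
static _Atomic int cpu_lock[NR_CPUS];

static bool trylock_cpu(int cpu)
{
        /* Acquire: nothing in the critical section can be reordered
         * before we own the lock word. */
        return atomic_exchange_explicit(&cpu_lock[cpu], 1,
                                        memory_order_acquire) == 0;
}

static void unlock_cpu(int cpu)
{
        /* Release: everything done under the lock is visible to whoever
         * acquires the same word next, even from another cpu. */
        atomic_store_explicit(&cpu_lock[cpu], 0, memory_order_release);
}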

Your RFC design certainly works (in principle -- I haven't looked at
the code in detail), but I can't shake the feeling that it's overkill
and that it could be improved to avoid unnecessary aborts every time
the lock holder is scheduled out.

This isn't a problem in your RFC design, but if we wanted to come up
with tighter primitives, we'd have to be quite careful to document
exactly what memory ordering guarantees they come with.

It may be that all architectures for which you care about the
performance boost already have efficient acquire and release
operations. Certainly x86 does, and I don't know how fast the new ARM
instructions are, but I imagine they're pretty good.

It's too bad that not all architectures have a single-instruction
unlocked compare-and-exchange.

--Andy