Re: [PATCH 0/4] Introduce QPW for per-cpu operations

From: Leonardo Bras

Date: Fri Feb 27 2026 - 20:23:56 EST


On Mon, Feb 23, 2026 at 10:06:32AM +0100, Michal Hocko wrote:
> On Fri 20-02-26 18:58:14, Leonardo Bras wrote:
> > On Mon, Feb 16, 2026 at 12:00:55PM +0100, Michal Hocko wrote:
> > > On Sat 14-02-26 19:02:19, Leonardo Bras wrote:
> > > > On Wed, Feb 11, 2026 at 05:38:47PM +0100, Michal Hocko wrote:
> > > > > On Wed 11-02-26 09:01:12, Marcelo Tosatti wrote:
> > > > > > On Tue, Feb 10, 2026 at 03:01:10PM +0100, Michal Hocko wrote:
> > > > > [...]
> > > > > > > What about !PREEMPT_RT? We have people running isolated workloads and
> > > > > > > these sorts of pcp disruptions are really unwelcome as well. They do not
> > > > > > > have requirements as strong as RT workloads but the underlying
> > > > > > > fundamental problem is the same. Frederic (now CCed) is working on
> > > > > > > moving those pcp book keeping activities to be executed on return to
> > > > > > > userspace, which should take care of both RT and non-RT
> > > > > > > configurations AFAICS.
> > > > > >
> > > > > > Michal,
> > > > > >
> > > > > > For !PREEMPT_RT, _if_ you select CONFIG_QPW=y, then there is a kernel
> > > > > > boot option qpw=y/n, which controls whether the behaviour will be
> > > > > > similar (the spinlock is taken on local_lock, similar to PREEMPT_RT).
> > > > >
> > > > > My bad. I've misread the config space of this.
> > > > >
> > > > > > If CONFIG_QPW=n, or kernel boot option qpw=n, then only local_lock
> > > > > > (and remote work via work_queue) is used.
> > > > > >
> > > > > > What "pcp book keeping activities" you refer to ? I don't see how
> > > > > > moving certain activities that happen under SLUB or LRU spinlocks
> > > > > > to happen before return to userspace changes things related
> > > > > > to avoidance of CPU interruption ?
> > > > >
> > > > > Essentially, delayed operations like pcp state flushing happen on return
> > > > > to userspace on isolated CPUs. No locking changes are required as
> > > > > the work is still per-cpu.
> > > > >
> > > > > In other words the approach Frederic is working on is to not change the
> > > > > locking of pcp delayed work but instead move that work into well defined
> > > > > place - i.e. return to the userspace.
> > > > >
> > > > > Btw. have you measured the impact of preempt_disable -> spinlock on hot
> > > > > paths like SLUB sheaves?
> > > >
> > > > Hi Michal,
> > > >
> > > > I have done some study on this (which I presented on Plumbers 2023):
> > > > https://lpc.events/event/17/contributions/1484/
> > > >
> > > > Since they are per-cpu spinlocks, and the remote operations are not that
> > > > frequent, as per design of the current approach, we are not supposed to see
> > > > contention (I was not able to detect contention even after stress testing
> > > > for weeks), nor relevant cacheline bouncing.
> > > >
> > > > That being said, for RT local_locks are already per-cpu spinlocks, so the
> > > > only difference is for !RT, which, as you mention, does preempt_disable():
> > > >
> > > > The performance impact noticed was mostly about jumping around in
> > > > executable code, as inlining spinlocks (test #2 on presentation) took care
> > > > of most of the added extra cycles, adding about 4-14 extra cycles per
> > > > lock/unlock cycle. (tested on memcg with kmalloc test)
> > > >
> > > > Yeah, as expected there is some extra cycles, as we are doing extra atomic
> > > > operations (even if in a local cacheline) in !RT case, but this could be
> > > > enabled only if the user thinks this is an ok cost for reducing
> > > > interruptions.
> > > >
> > > > What do you think?
> > >
> > > The fact that the behavior is opt-in for !RT is certainly a plus. I also
> > > do not expect the overhead to be really big.
> >
> > Awesome! Thanks for reviewing!
> >
> > > To me, a much
> > > more important question is which of the two approaches is easier to
> > > maintain long term. The pcp work needs to be done one way or the other.
> > > Whether we want to tweak locking or do it at a very well defined time is
> > > the bigger question.
> >
> > That crossed my mind as well, and I went with the idea of changing locking
> > because I was working on workloads in which deferring work to a kernel
> > re-entry would cause deadline misses as well. Or, more critically, the
> > drains could take forever, as some of those tasks avoid returning to
> > the kernel as much as possible.
>
> Could you be more specific please?

Hi Michal,
Sorry for the delay

I think Marcelo covered some of the main topics earlier in this
thread:

https://lore.kernel.org/all/aZ3ejedS7nE5mnva@tpad/

But in summary:
- There are workloads that are designed to avoid returning to kernelspace
as much as possible, as they are either cpu-intensive or latency-sensitive
(RT workloads), such as low-latency automation.

There are scenarios, such as industrial automation, in which the
application is supposed to reply to a request less than 50us after it was
generated (IIRC), so getting scheduled out, dealing with interruptions, or
doing syscalls are a no-go. In those cases, using cpu isolation is a must,
and since the task can stay running in userspace for a really long time,
it may take a very long time before any syscall actually performs the
scheduled flush.

- Other workloads, such as HPC, may need to use syscalls or rely on
interrupts, but it's also undesirable for those to take long, as the time
spent there is time not used for processing the required data.

Let's say that, for the sake of cpu isolation, a lot of different requests
made to a given isolated cpu are batched to be run on syscall entry/exit.
It means the next syscall may take much longer than usual.
- This may break other RT workloads, such as sensor/sound/image sampling,
which could generally be fine with some of the faster syscalls for their
application, and may now perceive an error because one of those syscalls
took too long.

While the qpw approach may cost a few extra cycles, it operates remotely
and makes the system a bit more predictable.
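To make it concrete, here is a rough sketch of what I mean by operating
remotely (names here are illustrative, not the exact API from the patch
set):

```c
/* Illustrative sketch only -- not the actual QPW API.
 * With qpw=1, the per-cpu state is protected by a per-cpu spinlock,
 * so a housekeeping CPU can drain a remote CPU's state directly,
 * without IPIs or queueing work on the isolated CPU. */
struct qpw_pcp {
	spinlock_t lock;	/* per-cpu, so normally uncontended */
	/* ... pcp state ... */
};

static DEFINE_PER_CPU(struct qpw_pcp, qpw_pcp_state);

static void drain_remote_cpu(int cpu)
{
	struct qpw_pcp *pcp = per_cpu_ptr(&qpw_pcp_state, cpu);

	/* Runs on the housekeeping CPU; the isolated CPU keeps
	 * running in userspace undisturbed. */
	spin_lock(&pcp->lock);
	/* ... flush/drain pcp state ... */
	spin_unlock(&pcp->lock);
}
```

The isolated CPU takes the same spinlock in its fastpath, which is where
the few extra cycles vs preempt_disable() come from.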

Also, when I was planning the mechanism, I remember it was meant to add
zero overhead in the CONFIG_QPW=n case, very little overhead in the
CONFIG_QPW=y + qpw=0 case (a couple of static branches, with the cost
possibly hidden by the cpu branch predictor), and only a few extra cycles
in the qpw=1 + !RT case. Which means we may be missing just a few
adjustments to get there.
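Roughly, the overhead model for the three modes could look like this
(again, just a sketch with hypothetical names, not the patch set code):

```c
/* Illustrative sketch of the intended overhead model. */
#ifdef CONFIG_QPW
static DEFINE_STATIC_KEY_FALSE(qpw_enabled);	/* flipped by the qpw=1 boot option */

static inline void qpw_lock(spinlock_t *lock, local_lock_t *llock)
{
	if (static_branch_unlikely(&qpw_enabled))
		spin_lock(lock);	/* qpw=1: per-cpu spinlock, a few extra cycles */
	else
		local_lock(llock);	/* qpw=0: only one nop-patched branch away */
}
#else
/* CONFIG_QPW=n: compiles down to the plain local_lock path, zero overhead */
#define qpw_lock(lock, llock)	local_lock(llock)
#endif
```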

BTW, if the numbers are not that great for your workloads, we could look
at adding an extra QPW mode in which local_locks are taken in the fastpath
and the flush wq is allowed to be postponed to that point in the syscall
return path that you mentioned. What I mean is that we don't need to be
limited to choosing between solutions, but can instead allow the user (or
distro) to choose the desired behavior.

Thanks!
Leo