Re: [PATCH 0/4] Introduce QPW for per-cpu operations
From: Marcelo Tosatti
Date: Thu Feb 19 2026 - 10:28:59 EST
On Mon, Feb 16, 2026 at 12:00:55PM +0100, Michal Hocko wrote:
> On Sat 14-02-26 19:02:19, Leonardo Bras wrote:
> > On Wed, Feb 11, 2026 at 05:38:47PM +0100, Michal Hocko wrote:
> > > On Wed 11-02-26 09:01:12, Marcelo Tosatti wrote:
> > > > On Tue, Feb 10, 2026 at 03:01:10PM +0100, Michal Hocko wrote:
> > > [...]
> > > > > What about !PREEMPT_RT? We have people running isolated workloads and
> > > > > these sorts of pcp disruptions are really unwelcome as well. They do not
> > > > > have requirements as strong as RT workloads but the underlying
> > > > > fundamental problem is the same. Frederic (now CCed) is working on
> > > > > moving those pcp book keeping activities to be executed to the return to
> > > > > the userspace which should be taking care of both RT and non-RT
> > > > > configurations AFAICS.
> > > >
> > > > Michal,
> > > >
> > > > For !PREEMPT_RT, _if_ you select CONFIG_QPW=y, then there is a kernel
> > > > boot option qpw=y/n, which controls whether the behaviour will be
> > > > similar (the spinlock is taken on local_lock, similar to PREEMPT_RT).
> > >
> > > My bad. I've misread the config space of this.
> > >
> > > > If CONFIG_QPW=n, or kernel boot option qpw=n, then only local_lock
> > > > (and remote work via work_queue) is used.
> > > >
> > > > What "pcp book keeping activities" you refer to ? I don't see how
> > > > moving certain activities that happen under SLUB or LRU spinlocks
> > > > to happen before return to userspace changes things related
> > > > to avoidance of CPU interruption ?
> > >
> > > Essentially delayed operations like pcp state flushing happens on return
> > > to the userspace on isolated CPUs. No locking changes are required as
> > > the work is still per-cpu.
> > >
> > > In other words the approach Frederic is working on is to not change the
> > > locking of pcp delayed work but instead move that work into well defined
> > > place - i.e. return to the userspace.
> > >
> > > Btw. have you measured the impact of preempt_disable -> spinlock on hot
> > > paths like SLUB sheaves?
> >
> > Hi Michal,
> >
> > I have done some study on this (which I presented on Plumbers 2023):
> > https://lpc.events/event/17/contributions/1484/
> >
> > Since they are per-cpu spinlocks, and the remote operations are not that
> > frequent, as per design of the current approach, we are not supposed to see
> > contention (I was not able to detect contention even after stress testing
> > for weeks), nor relevant cacheline bouncing.
> >
> > That being said, for RT local_locks already become per-cpu spinlocks, so
> > there is only a difference for !RT, which, as you mention, does
> > preempt_disable():
> >
> > The performance impact observed was mostly due to jumping around in
> > executable code: inlining the spinlocks (test #2 in the presentation) took
> > care of most of the added cycles, leaving about 4-14 extra cycles per
> > lock/unlock pair. (tested on memcg with a kmalloc test)
> >
> > Yeah, as expected there are some extra cycles, since we are doing extra
> > atomic operations (even if on a local cacheline) in the !RT case, but this
> > could be enabled only if the user considers it an acceptable cost for
> > reducing interruptions.
> >
> > What do you think?
>
> The fact that the behavior is opt-in for !RT is certainly a plus. I also
> do not expect the overhead to be really big. To me, a much
> more important question is which of the two approaches is easier to
> maintain long term. The pcp work needs to be done one way or the other.
> Whether we want to tweak locking or do it at a very well defined time is
> the bigger question.
> --
> Michal Hocko
> SUSE Labs
Michal,
Again, I don't see how moving operations to the user->kernel
transition would help (assuming you are talking about
"context_tracking,x86: Defer some IPIs until a user->kernel transition").

The IPIs in that patchset can be deferred until a user->kernel
transition because they are TLB flushes for addresses that do not
exist in the userspace address space mapping.
What are the per-CPU objects in SLUB?

struct slab_sheaf {
        union {
                struct rcu_head rcu_head;
                struct list_head barn_list;
                /* only used for prefilled sheafs */
                struct {
                        unsigned int capacity;
                        bool pfmemalloc;
                };
        };
        struct kmem_cache *cache;
        unsigned int size;
        int node;       /* only used for rcu_sheaf */
        void *objects[];
};

struct slub_percpu_sheaves {
        local_trylock_t lock;
        struct slab_sheaf *main;        /* never NULL when unlocked */
        struct slab_sheaf *spare;       /* empty or full, may be NULL */
        struct slab_sheaf *rcu_free;    /* for batching kfree_rcu() */
};
Examples of local CPU operations that manipulate these data structures:

1) kmalloc: allocates an object from the local per-CPU list.
2) kfree: returns an object to the local per-CPU list.
Examples of operations that perform changes on the per-CPU lists
remotely:

kmem_cache_destroy (cache shutdown), kmem_cache_shrink.
You can't delay kmalloc (removal of an object from the per-CPU freelist),
kfree (return of an object to the per-CPU freelist), kmem_cache_destroy,
or kmem_cache_shrink until return to userspace.
Am I missing something here? (Or do you have something in mind
which I can't see?)