Re: [PATCH 0/4] Introduce QPW for per-cpu operations

From: Vlastimil Babka

Date: Wed Feb 11 2026 - 11:59:27 EST


On 2/11/26 17:50, Marcelo Tosatti wrote:
> On Wed, Feb 11, 2026 at 05:38:47PM +0100, Michal Hocko wrote:
>> On Wed 11-02-26 09:01:12, Marcelo Tosatti wrote:
>> > On Tue, Feb 10, 2026 at 03:01:10PM +0100, Michal Hocko wrote:
>> [...]
>> > > What about !PREEMPT_RT? We have people running isolated workloads and
>> > > these sorts of pcp disruptions are really unwelcome as well. They do not
>> > > have requirements as strong as RT workloads but the underlying
>> > > fundamental problem is the same. Frederic (now CCed) is working on
>> > > moving those pcp book keeping activities to be executed to the return to
>> > > the userspace which should be taking care of both RT and non-RT
>> > > configurations AFAICS.
>> >
>> > Michal,
>> >
>> > For !PREEMPT_RT, _if_ you select CONFIG_QPW=y, then there is a kernel
>> > boot option qpw=y/n, which controls whether the behaviour will be
>> > similar to PREEMPT_RT (a spinlock is taken instead of the local_lock).
>>
>> My bad. I've misread the config space of this.
>
> My bad, actually. It's only CONFIG_QPW in the current patchset.
>
>> > If CONFIG_QPW=n, or kernel boot option qpw=n, then only the local_lock
>> > (and remote work via workqueue) is used.
>> >
>> > Which "pcp book keeping activities" do you refer to? I don't see how
>> > moving certain activities that happen under SLUB or LRU spinlocks
>> > to before the return to userspace changes anything with respect to
>> > avoiding CPU interruptions.
>>
>> Essentially, delayed operations like pcp state flushing happen on return
>> to userspace on isolated CPUs. No locking changes are required as the
>> work is still per-cpu.
>>
>> In other words, the approach Frederic is working on is not to change the
>> locking of pcp delayed work, but instead to move that work to a well
>> defined place, i.e. the return to userspace.
>>
>> Btw. have you measured the impact of the preempt_disable -> spinlock
>> conversion on hot paths like SLUB sheaves?
>
> Nope, I have not. What are the standard benchmarks for SLUB/SLAB
> allocation?

Those mentioned here, and I would say also netperf.
https://lore.kernel.org/all/20250913000935.1021068-1-sudarsanm@xxxxxxxxxx/