Re: [PATCH v2 2/5] Introducing qpw_lock() and per-cpu queue & flush work
From: Marcelo Tosatti
Date: Thu Mar 05 2026 - 20:48:19 EST
On Tue, Mar 03, 2026 at 01:03:36PM +0100, Vlastimil Babka (SUSE) wrote:
> On 3/2/26 16:49, Marcelo Tosatti wrote:
> > +#define local_qpw_lock(lock) \
> > + do { \
> > + if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl)) { \
> > + migrate_disable(); \
>
> Have you considered using migrate_disable() on PREEMPT_RT and
> preempt_disable() on !PREEMPT_RT since it's cheaper? It's what the pcp
> locking in mm/page_alloc.c does, for that reason. It should reduce the
> overhead with qpw=1 on !PREEMPT_RT.
migrate_disable:
Patched kernel, CONFIG_QPW=y, qpw=1: 192 cycles
preempt_disable:
[ 65.497223] kmalloc_bench: Avg cycles per kmalloc: 184 cycles
I tried it before, but it was crashing for some reason I didn't
look into (perhaps PREEMPT_RT was enabled).
Will change this for the next iteration, thanks.