Re: [PATCH 0/4] Introduce QPW for per-cpu operations

From: Marcelo Tosatti

Date: Mon Mar 02 2026 - 11:14:33 EST


On Mon, Feb 23, 2026 at 07:09:47PM +0100, Vlastimil Babka wrote:
> On 2/20/26 17:55, Marcelo Tosatti wrote:
> >
> > #include <linux/module.h>
> > #include <linux/kernel.h>
> > #include <linux/slab.h>
> > #include <linux/timex.h>
> > #include <linux/preempt.h>
> > #include <linux/irqflags.h>
> > #include <linux/vmalloc.h>
> >
> > MODULE_LICENSE("GPL");
> > MODULE_AUTHOR("Gemini AI");
> > MODULE_DESCRIPTION("A simple kmalloc performance benchmark");
> >
> > static int size = 64; // Default allocation size in bytes
> > module_param(size, int, 0644);
> >
> > static int iterations = 1000000; // Default number of iterations
> > module_param(iterations, int, 0644);
> >
> > static int __init kmalloc_bench_init(void)
> > {
> > 	void **ptrs;
> > 	cycles_t start, end;
> > 	uint64_t total_cycles;
> > 	int i;
> >
> > 	pr_info("kmalloc_bench: Starting test (size=%d, iterations=%d)\n", size, iterations);
> >
> > 	/* Allocate an array to store pointers to avoid immediate kfree-reuse optimization */
> > 	ptrs = vmalloc(sizeof(void *) * iterations);
> > 	if (!ptrs) {
> > 		pr_err("kmalloc_bench: Failed to allocate pointer array\n");
> > 		return -ENOMEM;
> > 	}
> >
> > 	preempt_disable();
> > 	start = get_cycles();
> >
> > 	for (i = 0; i < iterations; i++)
> > 		ptrs[i] = kmalloc(size, GFP_ATOMIC);
> >
> > 	end = get_cycles();
> >
> > 	total_cycles = end - start;
> > 	preempt_enable();
>
> While preempt_disable() simplifies things, it can misrepresent the cost of
> the preempt_disable() that's part of the locking: that one becomes nested,
> and a nested preempt_disable() is typically cheaper, etc.
>
> Also the way it kmallocs all iterations and then kfree all iterations may
> skew the probabilities of fastpaths, cache hotness etc.
>
> When introducing sheaves I had a similar microbenchmark, but there was
> different amounts of inner-loop iterations, no outer preempt_disable(), and
> linear vs randomized array. See:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/commit/?h=slub-percpu-sheaves-v6-benchmarking&id=04028eeffba18a4f821a7194bc9d14f7488bd7d9
>
> (at this point the SLUB_HAS_SHEAVES parts should be removed and the
> kmem_cache_print_stats() stuff also shouldn't be interesting for QPW
> evaluation).

Hi Vlastimil,

There is a problem where the numbers vary significantly across runs
(same kernel, idle system, isolated CPU).

SLUB_HAS_SHEAVES is not defined in my build. I just copied slub_kunit.c
from slub-percpu-sheaves-v6-benchmarking
to current tip (and dropped the call to kmem_cache_print_stats()).

1st run:
[ 635.059928] average (excl. iter 0): 56571797
[ 635.235206] average (excl. iter 0): 58329901
[ 635.409957] average (excl. iter 0): 57459678
[ 635.585128] average (excl. iter 0): 58268333
[ 635.767325] average (excl. iter 0): 60063837
[ 635.944534] average (excl. iter 0): 58912817
[ 636.154503] average (excl. iter 0): 68992131
[ 636.362533] average (excl. iter 0): 69030629
[ 636.536737] average (excl. iter 0): 56545622
[ 636.704314] average (excl. iter 0): 55536407
[ 636.879097] average (excl. iter 0): 57397803
[ 637.051157] average (excl. iter 0): 57021907
[ 637.296352] average (excl. iter 0): 81582815
[ 637.539810] average (excl. iter 0): 81126686

2nd run:
[ 662.824688] average (excl. iter 0): 56833529
[ 662.996742] average (excl. iter 0): 57145388
[ 663.167063] average (excl. iter 0): 55828870
[ 663.339814] average (excl. iter 0): 57505312
[ 663.514563] average (excl. iter 0): 57374528
[ 663.690328] average (excl. iter 0): 57282062
[ 663.896128] average (excl. iter 0): 68097440
[ 664.103029] average (excl. iter 0): 69263914
[ 664.276497] average (excl. iter 0): 57073271
[ 664.442210] average (excl. iter 0): 54895879
[ 664.617186] average (excl. iter 0): 56972700
[ 664.787353] average (excl. iter 0): 56457173
[ 665.028944] average (excl. iter 0): 80339269
[ 665.268597] average (excl. iter 0): 80371907

3rd run:
[ 716.278750] average (excl. iter 0): 54191777
[ 716.442014] average (excl. iter 0): 54151132
[ 716.605254] average (excl. iter 0): 53148722
[ 716.766461] average (excl. iter 0): 53204894
[ 716.933339] average (excl. iter 0): 54719251
[ 717.098761] average (excl. iter 0): 54922923
[ 717.296178] average (excl. iter 0): 65351864
[ 717.491440] average (excl. iter 0): 65264027
[ 717.660778] average (excl. iter 0): 54370768
[ 717.823625] average (excl. iter 0): 54137410
[ 717.988983] average (excl. iter 0): 54222488
[ 718.152716] average (excl. iter 0): 54339019
[ 718.387978] average (excl. iter 0): 78249026
[ 718.619598] average (excl. iter 0): 77746198

Increasing the total parameter from 10^6 to 10^7 does
not help:

1st run:
[ 1074.601686] average (excl. iter 0): 650711901
[ 1076.450880] average (excl. iter 0): 633014260
[ 1078.363300] average (excl. iter 0): 660440649
[ 1080.266134] average (excl. iter 0): 652695083
[ 1082.117007] average (excl. iter 0): 635632144
[ 1084.009277] average (excl. iter 0): 654270513
[ 1086.286343] average (excl. iter 0): 790520038
[ 1088.512516] average (excl. iter 0): 768071705
[ 1090.448161] average (excl. iter 0): 664564330
[ 1092.349683] average (excl. iter 0): 659016349
[ 1094.274099] average (excl. iter 0): 662388982
[ 1096.172362] average (excl. iter 0): 647972747
[ 1098.753304] average (excl. iter 0): 887576313
[ 1101.339897] average (excl. iter 0): 885102019

2nd run:
[ 1120.186284] average (excl. iter 0): 615756734
[ 1122.019323] average (excl. iter 0): 623846524
[ 1123.885801] average (excl. iter 0): 639124895
[ 1125.693617] average (excl. iter 0): 623667563
[ 1127.588515] average (excl. iter 0): 646441510
[ 1129.410285] average (excl. iter 0): 628291996
[ 1131.542157] average (excl. iter 0): 728497604
[ 1133.698744] average (excl. iter 0): 743717953
[ 1135.514112] average (excl. iter 0): 616621660
[ 1137.306874] average (excl. iter 0): 615863807
[ 1139.110637] average (excl. iter 0): 616425899
[ 1140.948769] average (excl. iter 0): 638115570
[ 1143.426557] average (excl. iter 0): 847799304
[ 1145.914827] average (excl. iter 0): 861180802

I will switch back to the simple test (it's pretty obvious
from the patch itself that if qpw=0 the overhead should
be zero, and it is). Its numbers are more
stable across runs.