Nope: looks the same. I've taken my test harness out of the picture and done
everything manually from the ground up, with the old tests and the new. Headline
is that I see similar numbers from both.
It took me a while to get really reproducible numbers on Intel. Most importantly:
* Set a fixed CPU frequency, disabling any boost and avoiding any
thermal throttling.
* Pin the test to specific CPUs and set a nice level (sketch below).
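
FWIW, the pinning + nice part can be done from within the test itself. A
minimal sketch, assuming Linux/glibc (CPU 0 and nice -20 are just example
values; the fixed-frequency part has to be set up separately via cpufreq):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
            cpu_set_t set;

            /* Pin this task to CPU 0 so it never migrates mid-run. */
            CPU_ZERO(&set);
            CPU_SET(0, &set);
            if (sched_setaffinity(0, sizeof(set), &set))
                    perror("sched_setaffinity");

            /* Nice -20 to minimise scheduler interference; needs root or
             * CAP_SYS_NICE. */
            if (setpriority(PRIO_PROCESS, 0, -20))
                    perror("setpriority");

            /* ... run the benchmark here ... */
            return 0;
    }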
I'm already pinning the test to CPU 0. But on M2, at least, I'm running in a VM
on top of macOS, and I don't have a mechanism to pin the QEMU threads to the
physical CPUs. Anyway, I don't think these are problems, because for a given
kernel build I can reproduce the numbers accurately.
But yes: I was observing something similar on AMD EPYC, where you get
consecutive pages from the buddy allocator, but once you allocate from the PCP
(per-CPU pages) lists they might no longer be consecutive.
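
If it's useful to sanity-check that on a given machine, something like the
following shows whether consecutively allocated virtual pages ended up
physically contiguous. A rough userspace sketch; reading PFNs from
/proc/self/pagemap needs root on recent kernels:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Read the PFN backing a virtual address from /proc/self/pagemap. */
    static uint64_t pfn_of(int fd, void *addr, long pagesz)
    {
            uint64_t entry;
            off_t off = ((uintptr_t)addr / pagesz) * sizeof(entry);

            if (pread(fd, &entry, sizeof(entry), off) != (ssize_t)sizeof(entry))
                    return 0;
            return entry & ((1ULL << 55) - 1);  /* bits 0-54 are the PFN */
    }

    int main(void)
    {
            long pagesz = sysconf(_SC_PAGESIZE);
            int npages = 16;
            int fd = open("/proc/self/pagemap", O_RDONLY);
            char *buf = aligned_alloc(pagesz, npages * pagesz);

            if (fd < 0 || !buf)
                    return 1;
            memset(buf, 1, npages * pagesz);    /* fault the pages in */
            for (int i = 1; i < npages; i++) {
                    uint64_t prev = pfn_of(fd, buf + (i - 1) * pagesz, pagesz);
                    uint64_t cur = pfn_of(fd, buf + i * pagesz, pagesz);

                    printf("page %2d: pfn 0x%llx (%scontiguous)\n", i,
                           (unsigned long long)cur,
                           cur == prev + 1 ? "" : "NOT ");
            }
            return 0;
    }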
- the test is 5-10% slower when output is printed to the terminal vs. when
  redirected to a file. I've always effectively been redirecting. Not sure if
  this overhead could start to dominate the regression and that's why you don't
  see it?
That's weird, because we don't print while measuring? Anyhow, 5-10% variance on
some systems is not the end of the world.
I imagine it's cache effects? The extra work to print the output could be
evicting some code that's on the benchmark path?
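
If so, one cheap way to rule the printing out is to keep all output strictly
after the timed region. A sketch of the idea, not a claim about how the actual
harness is structured:

    #include <stdio.h>
    #include <time.h>

    #define NRUNS 100

    int main(void)
    {
            struct timespec t0, t1;
            long ns[NRUNS];

            for (int i = 0; i < NRUNS; i++) {
                    clock_gettime(CLOCK_MONOTONIC, &t0);
                    /* ... timed work goes here; no printf in this region ... */
                    clock_gettime(CLOCK_MONOTONIC, &t1);
                    ns[i] = (t1.tv_sec - t0.tv_sec) * 1000000000L +
                            (t1.tv_nsec - t0.tv_nsec);
            }

            /*
             * All output happens after measurement, so a slow terminal
             * can't perturb the code or caches on the benchmark path.
             */
            for (int i = 0; i < NRUNS; i++)
                    printf("run %d: %ld ns\n", i, ns[i]);
            return 0;
    }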
I'm inclined to run this test for the last N kernel releases and, if the number
moves around significantly, conclude that these tests don't really matter.
Otherwise it's an exercise in randomly refactoring code until it works well, but
that's just overfitting to the compiler and hardware. What do you think?
Personally, I wouldn't lose sleep if you see weird, unexplainable behavior on
some system (not even a whole architecture!). Trying to optimize for that would
indeed be random refactoring.
But I would not be so quick to say that "these tests don't really matter" and
then go wild and degrade them as much as you want. There are use cases that care
about fork performance, especially with order-0 pages -- such as Redis.
Indeed. But also remember that my fork baseline time is ~2.5ms, and I think you
said yours was 14ms :)
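
For the record, the shape of microbenchmark I mean is roughly this. Just a
sketch: presumably the actual test also populates a large mapping first, so
there are page tables to copy, which is where the interesting cost is:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
            struct timespec t0, t1;
            int iters = 100;
            long total_ns = 0;

            for (int i = 0; i < iters; i++) {
                    clock_gettime(CLOCK_MONOTONIC, &t0);
                    pid_t pid = fork();

                    if (pid == 0)
                            _exit(0);           /* child exits immediately */
                    /* parent: stop the clock as soon as fork() returns */
                    clock_gettime(CLOCK_MONOTONIC, &t1);
                    waitpid(pid, NULL, 0);
                    total_ns += (t1.tv_sec - t0.tv_sec) * 1000000000L +
                                (t1.tv_nsec - t0.tv_nsec);
            }
            printf("avg fork: %.3f ms\n", total_ns / (double)iters / 1e6);
            return 0;
    }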
I'll continue to mess around with it until the end of the day. If I'm not
making any headway by then, I'll change tack: I'll just measure the performance
of my contpte changes using your fork/zap stuff as the baseline and post based
on that.