Re: [PATCH 0/1] sched: Restore PREEMPT_NONE as default

From: Salvatore Dipietro

Date: Wed Apr 08 2026 - 16:15:26 EST


On 2026-04-04 17:42 UTC, Andres Freund wrote:
> Salvatore, could you repeat that benchmark in some variations?
> 1) Use huge pages

Enabling Transparent Huge Pages both on the system and in the Postgres
configuration, the regression disappears and both kernels reach throughput in
the 185k tps range. Looking at /proc/vmstat, I can see a high minor page fault
rate, which indicates the memory pressure when huge pages are not enabled.

| Instance     | Arch | Baseline (tps) | PREEMPT_NONE (tps) | Ratio |
|--------------|------|----------------|--------------------|-------|
| m8g.24xlarge | ARM  | 186,664.56     | 189,934.34         | 1.01x |
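For reference, the THP state and fault-rate checks mentioned above can be
reproduced with something like the following sketch (standard Linux
sysfs/procfs paths; the 5-second sampling window is arbitrary):

```shell
# 1. Is Transparent Huge Pages enabled? The bracketed value is the
#    active mode, e.g. "always [madvise] never".
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
    || echo "THP sysfs interface not present"

# 2. Approximate minor-fault rate: pgfault counts all page faults,
#    pgmajfault only major ones, so their difference is minor faults.
read_minor_faults() {
    awk '/^pgfault / {f=$2} /^pgmajfault / {m=$2} END {print f - m}' /proc/vmstat
}
a=$(read_minor_faults)
sleep 5
b=$(read_minor_faults)
echo "minor faults/sec: $(( (b - a) / 5 ))"
```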


On 2026-04-05 1:40 UTC, Andres Freund wrote:
> Now, this machine is smaller and a different arch, so who knows.

To compare results, I ran the same reproducer with huge pages off on instances
of different architectures and sizes. In most cases the regression is present.
In particular for Graviton, increasing the instance size also increases the
regression, since it creates more contention on the resources.

| Instance     | Arch | Baseline (tps) | PREEMPT_NONE (tps) | Ratio |
|--------------|------|----------------|--------------------|-------|
| m8g.2xlarge  | ARM  | 23,438.98      | 21,378.73          | 0.91x |
| m8g.4xlarge  | ARM  | 40,843.86      | 42,496.78          | 1.04x |
| m8g.8xlarge  | ARM  | 49,096.64      | 85,796.66          | 1.75x |
|              |      |                |                    |       |
| m7i.2xlarge  | x86  | 16,615.54      | 23,381.16          | 1.40x |
| m7i.4xlarge  | x86  | 28,759.26      | 32,758.62          | 1.14x |
| m7i.8xlarge  | x86  | 73,456.28      | 83,419.36          | 1.14x |
| m7i.24xlarge | x86  | 63,489.67      | 67,314.40          | 1.06x |
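For anyone reproducing these runs, a quick way to confirm which preemption
model a given kernel is actually using (a sketch; it assumes a kernel built
with PREEMPT_DYNAMIC, a mounted debugfs, and a distro that installs
/boot/config-*):

```shell
# Runtime model on PREEMPT_DYNAMIC kernels; the parenthesized entry is
# the one currently active, e.g. "none voluntary (full) lazy".
cat /sys/kernel/debug/sched/preempt 2>/dev/null \
    || echo "debugfs not mounted or kernel lacks PREEMPT_DYNAMIC"

# Build-time selection, from the installed kernel config:
grep -E '^CONFIG_PREEMPT(_NONE|_VOLUNTARY|_LAZY|_DYNAMIC)?=' \
    "/boot/config-$(uname -r)" 2>/dev/null || true
```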



On 2026-04-05 1:40 UTC, Andres Freund wrote:
> Could you run something like the following while the benchmark is running:
> SELECT backend_type, wait_event_type, wait_event, state, count(*) FROM pg_stat_activity where wait_event_type NOT IN ('Activity') GROUP BY backend_type, wait_event_type, wait_event, state order by count(*) desc \watch 1
> and show what you see at the time your profile shows the bad contention?

On baseline, I constantly see SpinDelay as the first record, with a
significantly higher count than the other wait event types, while with the
patch WALWrite is constantly the first record.

Baseline:

backend_type | wait_event_type | wait_event | state | count
----------------+-----------------+----------------------+---------------------+-------
client backend | Timeout | SpinDelay | active | 838
client backend | LWLock | WALWrite | idle in transaction | 10
client backend | Client | ClientRead | idle in transaction | 4
client backend | LWLock | WALWrite | active | 3
client backend | Timeout | SpinDelay | idle | 2
client backend | Client | ClientRead | idle | 1
client backend | Client | ClientRead | active | 1
client backend | IO | WalSync | idle in transaction | 1
checkpointer | Timeout | CheckpointWriteDelay | | 1
(9 rows)


With patch (PREEMPT_NONE):


backend_type | wait_event_type | wait_event | state | count
----------------+-----------------+----------------------+---------------------+-------
client backend | LWLock | WALWrite | active | 922
client backend | IPC | ProcarrayGroupUpdate | active | 26
client backend | Client | ClientRead | active | 24
client backend | IO | DataFileRead | active | 11
client backend | LWLock | WALWrite | idle | 5
client backend | Timeout | SpinDelay | active | 4
client backend | IO | DataFileWrite | active | 3
client backend | IO | WalSync | active | 2
client backend | LWLock | WALWrite | idle in transaction | 1
walwriter | LWLock | WALWrite | | 1
checkpointer | IO | DataFileSync | | 1
client backend | IO | DataFileRead | idle | 1
(12 rows)




On 2026-04-05 14:44 UTC, Mitsumasa KONDO wrote:
> That said, this change is likely to cause similar breakage in other
> user-space applications beyond PostgreSQL that rely on lightweight
> spin loops on arm64. So I agree that the patch to retain PREEMPT_NONE
> is the right approach. At the same time, this is also something that
> distributions can resolve by patching their default kernel configuration.

That matches my view. PostgreSQL is where we first noticed the regression, but
it is likely not limited to this application.


On 2026-04-06 1:46 UTC, Mitsumasa KONDO wrote:
> Also worth noting: Salvatore's environment is an EC2 instance
> (m8g.24xlarge), not bare metal. Hypervisor-level vCPU scheduling
> adds another layer on top of PREEMPT_LAZY -- a lock holder can be
> descheduled not only by the kernel scheduler but also by the
> hypervisor, and the guest kernel has no visibility into this. This
> could amplify the regression in ways that are not reproducible on
> bare-metal systems, regardless of architecture.

I ran it against a bare-metal system of the same instance size (m8g.metal-24xl)
and the results are similar. This suggests that the hypervisor does not add
significant overhead to the regression on single-socket benchmarks.


| Instance       | Arch | Baseline (tps) | PREEMPT_NONE (tps) | Ratio |
|----------------|------|----------------|--------------------|-------|
| m8g.metal-24xl | ARM  | 61,489.83      | 90,225.66          | 1.47x |



On 2026-04-07 11:19 UTC, Mark Rutland wrote:
> Salvatore, was there a specific reason to test with PG_HUGE_PAGES=off
> rather than PG_HUGE_PAGES=try?

We test with various configurations to ensure customers don't encounter
regressions regardless of their setup choices, even if some configurations
aren't optimal for maximum performance.
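For completeness, a sketch of how the huge-page-related settings can be
inspected on a running system (the database name is a placeholder, and
PG_HUGE_PAGES is assumed to map onto PostgreSQL's huge_pages GUC, which
accepts off/on/try; 'try' uses huge pages when available and silently falls
back otherwise):

```shell
# Current huge_pages setting of the running PostgreSQL instance:
psql -d postgres -Atc "SHOW huge_pages;" 2>/dev/null \
    || echo "psql not available here"

# Explicit huge pages reserved on the host (zero when only THP is in use):
grep -E '^HugePages_(Total|Free):' /proc/meminfo
```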




AMAZON DEVELOPMENT CENTER ITALY SRL, viale Monte Grappa 3/5, 20124 Milano, Italia, Registro delle Imprese di Milano Monza Brianza Lodi REA n. 2504859, Capitale Sociale: 10.000 EUR i.v., Cod. Fisc. e P.IVA 10100050961, Societa con Socio Unico