Re: [PATCH] sched: Further restrict the preemption modes
From: Ciunas Bennett
Date: Tue Feb 24 2026 - 10:52:35 EST
On 19/12/2025 10:15, Peter Zijlstra wrote:
Hi Peter,
We are observing a performance regression on s390 since enabling PREEMPT_LAZY.
Test Environment
Architecture: s390
Setup:
Single KVM host running two identical guests
Guests are connected virtually via Open vSwitch
Workload: uperf streaming read test with 50 parallel connections
One guest acts as the uperf client, the other as the server
Open vSwitch configuration:
OVS bridge with two ports
Guests attached via virtio‑net
Each guest configured with 4 vhost‑queues
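For reference, the per-guest network device corresponds roughly to the following libvirt domain XML; this is a sketch, and the bridge name is illustrative, not taken from the setup above:

```xml
<!-- Illustrative <interface> definition: virtio-net backed by
     vhost with 4 queues, attached to an OVS bridge. -->
<interface type='bridge'>
  <source bridge='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
</interface>
```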
Problem Description
When comparing PREEMPT_LAZY against PREEMPT_FULL, we see a substantial drop in throughput, up to 50% on some systems.
Observed Behaviour
By tracing packets inside Open vSwitch (ovs_do_execute_action), we see:
Packet drops
Retransmissions
Reductions in packet size (from 64K down to 32K)
Capturing traffic inside the VM and inspecting it in Wireshark shows the following TCP‑level differences between PREEMPT_FULL and PREEMPT_LAZY:
|--------------------------------------+--------------+--------------+------------------|
| Wireshark Warning / Note | PREEMPT_FULL | PREEMPT_LAZY | (lazy vs full) |
|--------------------------------------+--------------+--------------+------------------|
| D-SACK Sequence | 309 | 2603 | ×8.4 |
| Partial Acknowledgement of a segment | 54 | 279 | ×5.2 |
| Ambiguous ACK (Karn) | 32 | 747 | ×23 |
| (Suspected) spurious retransmission | 205 | 857 | ×4.2 |
| (Suspected) fast retransmission | 54 | 1622 | ×30 |
| Duplicate ACK | 504 | 3446 | ×6.8 |
| Packet length exceeds MSS (TSO/GRO) | 13172 | 34790 | ×2.6 |
| Previous segment(s) not captured | 9205 | 6730 | -27% |
| ACKed segment that wasn't captured | 7022 | 8272 | +18% |
| (Suspected) out-of-order segment | 436 | 303 | -31% |
|--------------------------------------+--------------+--------------+------------------|
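The "(lazy vs full)" column above can be recomputed from the raw counts; the table reports large increases as a multiplier and smaller changes as a signed percentage (a convention chosen here for the sketch, using a x2 cutoff):

```python
def delta(full: int, lazy: int) -> str:
    """Express the PREEMPT_LAZY count relative to the PREEMPT_FULL
    count: a multiplier for large increases, a signed percentage
    otherwise (threshold of x2 chosen for illustration)."""
    ratio = lazy / full
    if ratio >= 2:
        return f"x{ratio:.1f}"
    return f"{100 * (ratio - 1):+.0f}%"

# (PREEMPT_FULL, PREEMPT_LAZY) counts from the Wireshark table.
counts = {
    "D-SACK Sequence": (309, 2603),
    "Partial Acknowledgement of a segment": (54, 279),
    "Ambiguous ACK (Karn)": (32, 747),
    "(Suspected) spurious retransmission": (205, 857),
    "(Suspected) fast retransmission": (54, 1622),
    "Duplicate ACK": (504, 3446),
    "Packet length exceeds MSS (TSO/GRO)": (13172, 34790),
    "Previous segment(s) not captured": (9205, 6730),
    "ACKed segment that wasn't captured": (7022, 8272),
    "(Suspected) out-of-order segment": (436, 303),
}

for name, (full, lazy) in counts.items():
    print(f"{name}: {delta(full, lazy)}")
```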
This pattern indicates reordering, loss, or scheduling‑related delays, but it is still unclear why PREEMPT_LAZY is causing this behaviour in this workload.
Additional observations:
Monitoring the guest CPU run time shows that it drops from 16% with PREEMPT_FULL to 9% with PREEMPT_LAZY.
The workload is dominated by voluntary scheduling (explicit calls to schedule()), while PREEMPT_LAZY is, as far as I understand, mainly concerned with forced preemption.
It is therefore not obvious why PREEMPT_LAZY has an impact here.
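For side-by-side comparisons, and assuming the kernel is built with CONFIG_PREEMPT_DYNAMIC (plus lazy support on the architecture), the preemption model can be selected per boot rather than per build; a sketch of the relevant kernel command-line fragment:

```
# kernel command line (illustrative):
preempt=lazy    # or preempt=full / preempt=voluntary / preempt=none
```

With the same dynamic-preemption support, the available models (active one in brackets) are also visible under /sys/kernel/debug/sched/preempt.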
Changing the guest configuration to disable mergeable RX buffers:
<host mrg_rxbuf="off"/>
had a clear effect on throughput:
PREEMPT_LAZY: throughput improved from 40 Gb/s to 60 Gb/s