Re: [PATCH] sched/fair: Reschedule the cfs_rq when current is ineligible

From: Honglei Wang
Date: Tue Jun 11 2024 - 07:53:53 EST

On 2024/6/6 20:39, Chunxin Zang wrote:

> Hi Honglei,

> Recently, I tested multiple cgroups with version 2 of the patch. Version 2
> preserves the RUN_TO_PARITY behaviour, so the test results are somewhat better
> under NO_RUN_TO_PARITY:
> https://lore.kernel.org/lkml/20240529141806.16029-1-spring.cxz@xxxxxxxxx/T/
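The "ineligible" condition in the patch title is EEVDF's eligibility test: an
entity is eligible while its vruntime is at or before the load-weighted average
vruntime V of its cfs_rq. A minimal user-space sketch of that test follows, under
the simplified textbook definition (the kernel's entity_eligible() in
kernel/sched/fair.c tracks the weighted sums incrementally via avg_vruntime()
rather than looping, so treat this only as an illustration):

#include <stdio.h>

struct entity {
	long long vruntime;	/* virtual runtime, arbitrary units */
	unsigned long weight;	/* load weight, nice 0 == 1024 */
};

/* Load-weighted average vruntime: V = sum(w_i * v_i) / sum(w_i). */
static long long avg_vruntime(const struct entity *q, int n)
{
	long long sum = 0, load = 0;

	for (int i = 0; i < n; i++) {
		sum  += (long long)q[i].weight * q[i].vruntime;
		load += (long long)q[i].weight;
	}
	return load ? sum / load : 0;
}

/* EEVDF eligibility: an entity is eligible while v_e <= V. The patch under
 * discussion reschedules once the current entity stops satisfying this,
 * instead of letting it run on. */
static int eligible(const struct entity *q, int n, const struct entity *e)
{
	return e->vruntime <= avg_vruntime(q, n);
}

int main(void)
{
	struct entity q[] = {
		{ 100, 1024 },
		{ 300, 1024 },
		{ 260, 2048 },
	};

	/* V = (100*1024 + 300*1024 + 260*2048) / 4096 = 230 */
	for (int i = 0; i < 3; i++)
		printf("entity %d (v=%lld): %s\n", i, q[i].vruntime,
		       eligible(q, 3, &q[i]) ? "eligible" : "ineligible");
	return 0;
}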

> The testing environment still used 4 cores, running 4 groups of hackbench
> (160 processes) plus 1 cyclictest. If too many cgroups or processes are created
> on the 4 cores, the results fluctuate severely, making it difficult to discern
> any differences.

> The cgroups were organized in two forms:
> 1. Within a single-level hierarchy, 10 sibling sub-cgroups were created, each
> with an average of 16 processes.

> # Avg Latencies (us):
>
>              EEVDF   PATCH   EEVDF-NO_PARITY   PATCH-NO_PARITY
> LNICE(-19)   00572   00347   00502             00218
> LNICE(0)     02262   02225   02442             02321
> LNICE(19)    03132   03422   03333             03489

> 2. As a binary tree of depth 4 with 8 leaf cgroups; on average, each cgroup
> held 20 processes (a sketch of one way to script this layout follows the
> table below).

> # Avg Latencies (us):
>
>              EEVDF   PATCH   EEVDF-NO_PARITY   PATCH-NO_PARITY
> LNICE(-19)   00601   00592   00510             00400
> LNICE(0)     02703   02170   02381             02126
> LNICE(19)    04773   03387   04478             03611
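For anyone wanting to reproduce the second layout, here is a minimal sketch of
one way to build it (hypothetical, not Chunxin's actual script): it assumes a
cgroup v2 mount at /sys/fs/cgroup, uses an illustrative "eevdf-test" root, and
only creates the directories; distributing the hackbench tasks into the leaves
is left out.

#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>

/* Recursively create a binary tree of cgroup directories. With levels == 4
 * this gives 4 levels counting the test root, i.e. 2^3 = 8 leaf cgroups. */
static void mktree(const char *base, int levels)
{
	char path[512];

	if (mkdir(base, 0755) && errno != EEXIST)
		perror(base);
	if (levels <= 1)
		return;		/* this node is a leaf */
	for (int i = 0; i < 2; i++) {
		snprintf(path, sizeof(path), "%s/c%d", base, i);
		mktree(path, levels - 1);
	}
}

int main(void)
{
	mktree("/sys/fs/cgroup/eevdf-test", 4);
	return 0;
}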

> Based on the test results, there is a noticeable improvement in scheduling
> latency after applying the patch in scenarios involving multiple cgroups.
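(For scale, computed from the tables above: with RUN_TO_PARITY, the LNICE(-19)
average in the flat layout drops from 00572 to 00347, roughly 39% lower, and
under NO_RUN_TO_PARITY from 00502 to 00218, roughly 57% lower; in the tree
layout, LNICE(19) drops from 04773 to 03387, roughly 29% lower.)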


> thanks
> Chunxin

Hi Chunxin,

Thanks for sharing the test results; they look helpful, at least in this
multi-cgroup scenario. I'm still curious which of the two changes helps more
in your test, as mentioned in the very first mail of this thread.

Thanks,
Honglei