Re: [PATCH v7] sched: Consolidate cpufreq updates

From: Anjali K
Date: Mon Oct 07 2024 - 13:20:52 EST


Hi, I tested this patch with microbenchmarks to check for regressions on bare-metal Power9 systems.
The test system is a 2-NUMA-node, 128-CPU PowerNV Power9 system with the conservative cpufreq governor enabled.
I took the 6.10.0-rc1 tip sched/core kernel as the baseline; the results in the table below are normalized to it.
No regressions were found.
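
For completeness, the governor is selected through the standard cpufreq sysfs
interface; a minimal sketch of one way to do that (assuming root and that the
conservative governor is built in or loaded - this is an illustration, not the
exact setup script used for this test) is:

  #!/usr/bin/env python3
  # Select the conservative governor on every CPU via the cpufreq sysfs
  # interface (requires root; the governor must be available on the system).
  import glob

  for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
      with open(path, "w") as f:
          f.write("conservative")

The same effect can also be achieved with "cpupower frequency-set -g conservative".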

+------------------------------------------------------+--------------------+----------+
|                     Benchmark                        |      Baseline      | Baseline |
|                                                      |  (6.10.0-rc1 tip   | + patch  |
|                                                      |  sched/core)       |          |
+------------------------------------------------------+--------------------+----------+
|Hackbench run duration (sec)                          |         1          |   1.01   |
|Lmbench simple fstat (usec)                           |         1          |   0.99   |
|Lmbench simple open/close (usec)                      |         1          |   1.02   |
|Lmbench simple read (usec)                            |         1          |   1      |
|Lmbench simple stat (usec)                            |         1          |   1.01   |
|Lmbench simple syscall (usec)                         |         1          |   1.01   |
|Lmbench simple write (usec)                           |         1          |   1      |
|stress-ng (bogo ops)                                  |         1          |   0.94   |
|Unixbench execl throughput (lps)                      |         1          |   0.97   |
|Unixbench Pipebased Context Switching throughput (lps)|         1          |   0.94   |
|Unixbench Process Creation (lps)                      |         1          |   1      |
|Unixbench Shell Scripts (1 concurrent) (lpm)          |         1          |   1      |
|Unixbench Shell Scripts (8 concurrent) (lpm)          |         1          |   1.01   |
+------------------------------------------------------+--------------------+----------+

Thank you,
Anjali K