Re: [linus:master] [timers] 7ee9887703: stress-ng.uprobe.ops_per_sec -17.1% regression
From: Anna-Maria Behnsen
Date: Thu Apr 25 2024 - 04:23:30 EST
Hi,
(adding cpuidle/power people to cc-list)
Oliver Sang <oliver.sang@xxxxxxxxx> writes:
> hi, Frederic Weisbecker,
>
> On Tue, Apr 02, 2024 at 12:46:15AM +0200, Frederic Weisbecker wrote:
>> Le Wed, Mar 27, 2024 at 04:39:17PM +0800, kernel test robot a écrit :
>> >
>> >
>> > Hello,
>> >
>> >
>> > we reported
>> > "[tip:timers/core] [timers] 7ee9887703: netperf.Throughput_Mbps -1.2% regression"
>> > in
>> > https://lore.kernel.org/all/202403011511.24defbbd-oliver.sang@xxxxxxxxx/
>> >
>> > now we noticed this commit is in mainline and we captured further results.
>> >
>> > we still include the netperf results for completeness; details below, FYI.
>> >
>> >
>> > kernel test robot noticed a -17.1% regression of stress-ng.uprobe.ops_per_sec
>> > on:
>>
>> The good news is that I can reproduce.
>> It has made me spot something already:
>>
>> https://lore.kernel.org/lkml/ZgsynV536q1L17IS@xxxxxxxxxxxxx/T/#m28c37a943fdbcbadf0332cf9c32c350c74c403b0
>>
>> But that's not enough to fix the regression. Investigation continues...
>
> Thanks a lot for the information! If you want us to test any patch, please let us know.
Oliver, I would be happy to see whether the patch at the end of this
message also restores the original behaviour in your test setup. I
applied it on 6.9-rc4. The patch is not a fix - it is just a pointer to
the kernel path that might cause the regression. Be aware that it will
probably trigger a warning in tick_sched, which happens when the first
timer is already in the past. I didn't add an extra check for this case
when creating the 'defacto' timer values, but the existing code already
handles this problem properly, so the warning can be ignored here.
For the cpuidle people, let me explain what I observed, my resulting
assumption, and my request for help:
cpuidle governors use the expected sleep length (among other data) to
decide which idle state to enter. The expected sleep length takes the
first queued timer of the CPU into account and is provided by
tick_nohz_get_sleep_length(). With the timer pull model in place, non
pinned timers are no longer taken into account when other CPUs are up
and running which could handle those timers. This can lead to increased
sleep length values. On my system during the stress-ng uprobes test,
the maximum was in the range of 100us without the patch set, and in the
range of 200s with it. This is intended behaviour: timers which could
expire on any CPU should expire on a CPU which is busy anyway, so the
non busy CPU is able to go idle.
Those increased sleep length values were the only anomalies I could find
in the traces with the regression.
I created the patch below, which simply fakes the sleep length values
so that they take all timers of the CPU into account (including the non
pinned ones). The patch thereby restores the behaviour of
tick_nohz_get_sleep_length() from before the change, while keeping the
timer pull model in place.
With the patch the regression was gone, at least on my system (tested
with both the menu and teo cpuidle governors).
So my assumption here is that with the increased sleep length values
the cpuidle governors choose a deeper idle state, and returning from
the deeper idle state adds overhead. But I have to admit that I'm still
not familiar with cpuidle internals... So I would be happy about some
hints on how I can debug/trace cpuidle internals to verify or falsify
this assumption.
Thanks,
Anna-Maria
---8<----
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 3baf2fbe6848..c0e62c365355 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -2027,7 +2027,8 @@ static unsigned long next_timer_interrupt(struct timer_base *base,
 static unsigned long fetch_next_timer_interrupt(unsigned long basej, u64 basem,
 					struct timer_base *base_local,
 					struct timer_base *base_global,
-					struct timer_events *tevt)
+					struct timer_events *tevt,
+					struct timer_events *defacto)
 {
 	unsigned long nextevt, nextevt_local, nextevt_global;
 	bool local_first;
@@ -2035,6 +2036,14 @@ static unsigned long fetch_next_timer_interrupt(unsigned long basej, u64 basem,
 	nextevt_local = next_timer_interrupt(base_local, basej);
 	nextevt_global = next_timer_interrupt(base_global, basej);
 
+	if (defacto) {
+		if (base_global->timers_pending)
+			defacto->global = basem + (u64)(nextevt_global - basej) * TICK_NSEC;
+
+		if (base_local->timers_pending)
+			defacto->local = basem + (u64)(nextevt_local - basej) * TICK_NSEC;
+	}
+
 	local_first = time_before_eq(nextevt_local, nextevt_global);
 
 	nextevt = local_first ? nextevt_local : nextevt_global;
@@ -2113,7 +2122,7 @@ void fetch_next_timer_interrupt_remote(unsigned long basej, u64 basem,
 	lockdep_assert_held(&base_local->lock);
 	lockdep_assert_held(&base_global->lock);
 
-	fetch_next_timer_interrupt(basej, basem, base_local, base_global, tevt);
+	fetch_next_timer_interrupt(basej, basem, base_local, base_global, tevt, NULL);
 }
 
 /**
@@ -2228,6 +2237,7 @@ static void timer_use_tmigr(unsigned long basej, u64 basem,
 static inline u64 __get_next_timer_interrupt(unsigned long basej, u64 basem,
 					     bool *idle)
 {
+	struct timer_events defacto = { .local = KTIME_MAX, .global = KTIME_MAX };
 	struct timer_events tevt = { .local = KTIME_MAX, .global = KTIME_MAX };
 	struct timer_base *base_local, *base_global;
 	unsigned long nextevt;
@@ -2250,7 +2260,7 @@ static inline u64 __get_next_timer_interrupt(unsigned long basej, u64 basem,
 	raw_spin_lock_nested(&base_global->lock, SINGLE_DEPTH_NESTING);
 
 	nextevt = fetch_next_timer_interrupt(basej, basem, base_local,
-					     base_global, &tevt);
+					     base_global, &tevt, &defacto);
 
 	/*
 	 * If the next event is only one jiffie ahead there is no need to call
@@ -2319,6 +2329,7 @@ static inline u64 __get_next_timer_interrupt(unsigned long basej, u64 basem,
 	raw_spin_unlock(&base_global->lock);
 	raw_spin_unlock(&base_local->lock);
 
+	tevt.local = min_t(u64, defacto.local, defacto.global);
 	return cmp_next_hrtimer_event(basem, tevt.local);
 }