Re: [tip: timers/urgent] clockevents: Add missing resets of the next_event_forced flag

From: Linux regression tracking (Thorsten Leemhuis)

Date: Sun Apr 19 2026 - 11:12:43 EST


On 4/16/26 21:26, tip-bot2 for Thomas Gleixner wrote:
> The following commit has been merged into the timers/urgent branch of tip:
>
> Commit-ID: 4096fd0e8eaea13ebe5206700b33f49635ae18e5
> Gitweb: https://git.kernel.org/tip/4096fd0e8eaea13ebe5206700b33f49635ae18e5
> Author: Thomas Gleixner <tglx@xxxxxxxxxx>
> AuthorDate: Tue, 14 Apr 2026 22:55:01 +02:00
> Committer: Thomas Gleixner <tglx@xxxxxxxxxx>
> CommitterDate: Thu, 16 Apr 2026 21:22:04 +02:00
>
> clockevents: Add missing resets of the next_event_forced flag

Just wondering: what's the plan to mainline this? I wonder if it's
worth mainlining rather quickly and then telling the stable team right
afterwards to queue it up for 7.0.1, as in addition to the two affected
people in this thread (one of whom stated that "several users from
CachyOS reported this regression as well") I noticed three more 7.0 bug
reports in the past few days that are likely fixed by the quoted patch:

https://gitlab.freedesktop.org/drm/amd/-/work_items/5178#note_3432195
https://bugzilla.kernel.org/show_bug.cgi?id=221370
https://bugzilla.kernel.org/show_bug.cgi?id=221377

Ciao, Thorsten

> The prevention mechanism against timer interrupt starvation missed to reset
> the next_event_forced flag in a couple of places:
>
> - When the clock event state changes. That can cause the flag to be
>   stale over a shutdown/startup sequence.
>
> - When a non-forced event is armed, which then prevents rearming before
>   that event. If that event is far out in the future this will cause
>   missed timer interrupts.
>
> - In the suspend wakeup handler.
>
> That led to stalls which have been reported by several people.
>
> Add the missing resets, which fixes the problems for the reporters.
>
> Fixes: d6e152d905bd ("clockevents: Prevent timer interrupt starvation")
> Reported-by: Hanabishi <i.r.e.c.c.a.k.u.n+kernel.org@xxxxxxxxx>
> Reported-by: Eric Naim <dnaim@xxxxxxxxxxx>
> Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxx>
> Tested-by: Hanabishi <i.r.e.c.c.a.k.u.n+kernel.org@xxxxxxxxx>
> Tested-by: Eric Naim <dnaim@xxxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> Closes: https://lore.kernel.org/68d1e9ac-2780-4be3-8ee3-0788062dd3a4@xxxxxxxxx
> Link: https://patch.msgid.link/87340xfeje.ffs@tglx
> ---
>  kernel/time/clockevents.c    | 7 ++++++-
>  kernel/time/tick-broadcast.c | 1 +
>  2 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
> index b4d7306..5e22697 100644
> --- a/kernel/time/clockevents.c
> +++ b/kernel/time/clockevents.c
> @@ -94,6 +94,9 @@ static int __clockevents_switch_state(struct clock_event_device *dev,
>  	if (dev->features & CLOCK_EVT_FEAT_DUMMY)
>  		return 0;
>
> +	/* On state transitions clear the forced flag unconditionally */
> +	dev->next_event_forced = 0;
> +
>  	/* Transition with new state-specific callbacks */
>  	switch (state) {
>  	case CLOCK_EVT_STATE_DETACHED:
> @@ -366,8 +369,10 @@ int clockevents_program_event(struct clock_event_device *dev, ktime_t expires, b
>  	if (delta > (int64_t)dev->min_delta_ns) {
>  		delta = min(delta, (int64_t) dev->max_delta_ns);
>  		cycles = ((u64)delta * dev->mult) >> dev->shift;
> -		if (!dev->set_next_event((unsigned long) cycles, dev))
> +		if (!dev->set_next_event((unsigned long) cycles, dev)) {
> +			dev->next_event_forced = 0;
>  			return 0;
> +		}
>  	}
>
>  	if (dev->next_event_forced)
> diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
> index 7e57fa3..115e0bf 100644
> --- a/kernel/time/tick-broadcast.c
> +++ b/kernel/time/tick-broadcast.c
> @@ -108,6 +108,7 @@ static struct clock_event_device *tick_get_oneshot_wakeup_device(int cpu)
>
>  static void tick_oneshot_wakeup_handler(struct clock_event_device *wd)
>  {
> +	wd->next_event_forced = 0;
>  	/*
>  	 * If we woke up early and the tick was reprogrammed in the
>  	 * meantime then this may be spurious but harmless.
>