[tip: timers/urgent] clockevents: Add missing resets of the next_event_forced flag

From: tip-bot2 for Thomas Gleixner

Date: Thu Apr 16 2026 - 15:26:44 EST


The following commit has been merged into the timers/urgent branch of tip:

Commit-ID: 4096fd0e8eaea13ebe5206700b33f49635ae18e5
Gitweb: https://git.kernel.org/tip/4096fd0e8eaea13ebe5206700b33f49635ae18e5
Author: Thomas Gleixner <tglx@xxxxxxxxxx>
AuthorDate: Tue, 14 Apr 2026 22:55:01 +02:00
Committer: Thomas Gleixner <tglx@xxxxxxxxxx>
CommitterDate: Thu, 16 Apr 2026 21:22:04 +02:00

clockevents: Add missing resets of the next_event_forced flag

The prevention mechanism against timer interrupt starvation failed to reset
the next_event_forced flag in a couple of places:

- When the clock event state changes. That can leave the flag stale
across a shutdown/startup sequence.

- When a non-forced event is armed, which then prevents rearming before
that event. If that event is far out in the future, this causes
missed timer interrupts.

- In the suspend wakeup handler.

That led to stalls, which were reported by several people.

Add the missing resets, which fixes the problems for the reporters.
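
[ Editor's note: the three reset points can be seen in miniature in the
following standalone sketch. This is a simplified model, not kernel
code; the struct layout and helper names are assumptions that merely
mirror the patched functions for readability. ]

```c
/*
 * Standalone model (NOT kernel code) of the next_event_forced
 * bookkeeping that this patch fixes. All names are simplified
 * stand-ins for the real kernel functions.
 */
#include <assert.h>

struct clock_event_device {
	int next_event_forced;	/* set when an event was force-programmed */
	int state;		/* simplified device state */
};

enum { STATE_SHUTDOWN, STATE_ONESHOT };

/* Fix 1: clear the flag unconditionally on every state transition */
static void switch_state(struct clock_event_device *dev, int state)
{
	dev->next_event_forced = 0;
	dev->state = state;
}

/* Fix 2: clear the flag once a regular (non-forced) event is armed */
static int program_event(struct clock_event_device *dev, int hw_ok)
{
	if (hw_ok) {
		dev->next_event_forced = 0;	/* armed normally */
		return 0;
	}
	/* Failure path: a forced minimum-delta retry would set the flag */
	dev->next_event_forced = 1;
	return -1;
}

/* Fix 3: clear the flag in the oneshot wakeup handler */
static void oneshot_wakeup_handler(struct clock_event_device *wd)
{
	wd->next_event_forced = 0;
}

int clockevents_model_selftest(void)
{
	struct clock_event_device dev = { .next_event_forced = 1 };

	/* A stale flag must not survive a shutdown/startup sequence */
	switch_state(&dev, STATE_SHUTDOWN);
	if (dev.next_event_forced)
		return 1;

	/* Successfully arming an event clears the flag ... */
	dev.next_event_forced = 1;
	if (program_event(&dev, 1) || dev.next_event_forced)
		return 2;

	/* ... while a failed attempt leaves it set for the retry path */
	program_event(&dev, 0);
	if (!dev.next_event_forced)
		return 3;

	/* The wakeup handler resets it as well */
	oneshot_wakeup_handler(&dev);
	return dev.next_event_forced ? 4 : 0;
}
```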

Fixes: d6e152d905bd ("clockevents: Prevent timer interrupt starvation")
Reported-by: Hanabishi <i.r.e.c.c.a.k.u.n+kernel.org@xxxxxxxxx>
Reported-by: Eric Naim <dnaim@xxxxxxxxxxx>
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxx>
Tested-by: Hanabishi <i.r.e.c.c.a.k.u.n+kernel.org@xxxxxxxxx>
Tested-by: Eric Naim <dnaim@xxxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Closes: https://lore.kernel.org/68d1e9ac-2780-4be3-8ee3-0788062dd3a4@xxxxxxxxx
Link: https://patch.msgid.link/87340xfeje.ffs@tglx
---
kernel/time/clockevents.c | 7 ++++++-
kernel/time/tick-broadcast.c | 1 +
2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
index b4d7306..5e22697 100644
--- a/kernel/time/clockevents.c
+++ b/kernel/time/clockevents.c
@@ -94,6 +94,9 @@ static int __clockevents_switch_state(struct clock_event_device *dev,
if (dev->features & CLOCK_EVT_FEAT_DUMMY)
return 0;

+ /* On state transitions clear the forced flag unconditionally */
+ dev->next_event_forced = 0;
+
/* Transition with new state-specific callbacks */
switch (state) {
case CLOCK_EVT_STATE_DETACHED:
@@ -366,8 +369,10 @@ int clockevents_program_event(struct clock_event_device *dev, ktime_t expires, b
if (delta > (int64_t)dev->min_delta_ns) {
delta = min(delta, (int64_t) dev->max_delta_ns);
cycles = ((u64)delta * dev->mult) >> dev->shift;
- if (!dev->set_next_event((unsigned long) cycles, dev))
+ if (!dev->set_next_event((unsigned long) cycles, dev)) {
+ dev->next_event_forced = 0;
return 0;
+ }
}

if (dev->next_event_forced)
diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 7e57fa3..115e0bf 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -108,6 +108,7 @@ static struct clock_event_device *tick_get_oneshot_wakeup_device(int cpu)

static void tick_oneshot_wakeup_handler(struct clock_event_device *wd)
{
+ wd->next_event_forced = 0;
/*
* If we woke up early and the tick was reprogrammed in the
* meantime then this may be spurious but harmless.