Re: [PATCH] arm64: remove HAVE_CMPXCHG_LOCAL
From: K Prateek Nayak
Date: Tue Feb 17 2026 - 23:01:39 EST
Hello Catalin,
On 2/17/2026 10:18 PM, Catalin Marinas wrote:
> Yes, that would be good. It's the preempt_enable_notrace() path that
> ends up calling preempt_schedule_notrace() -> __schedule() pretty much
> unconditionally.
What do you mean by unconditionally? We always check
__preempt_count_dec_and_test() before calling into __schedule().
On x86, we use the MSB of preempt_count to indicate that a resched
is needed, and set_preempt_need_resched() simply clears this MSB
(the bit has inverted polarity). If preempt_count() then turns 0, we
immediately go into schedule, or the next preempt_enable() ->
__preempt_count_dec_and_test() will see the entire preempt_count
being clear and will call into schedule.
The arm64 implementation seems to be doing something similar with a
separate "ti->preempt.need_resched" word that shares a union with
"ti->preempt_count", so it isn't really unconditional.
> Not sure what would go wrong but some simple change
> like this (can be done at a higher level in the preempt macros to
> even avoid getting here):
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 854984967fe2..d9a5d6438303 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7119,7 +7119,7 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
> if (likely(!preemptible()))
> return;
>
> - do {
> + while (need_resched()) {
Essentially you are simply checking it twice now on entry, since the
need_resched() state would have already been communicated by
__preempt_count_dec_and_test().
> /*
> * Because the function tracer can trace preempt_count_sub()
> * and it also uses preempt_enable/disable_notrace(), if
> @@ -7146,7 +7146,7 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
>
> preempt_latency_stop(1);
> preempt_enable_no_resched_notrace();
> - } while (need_resched());
> + }
> }
> EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
--
Thanks and Regards,
Prateek