Re: [PATCH 06/10] arm64: entry: Don't preempt with SError or Debug masked

From: Jinjie Ruan

Date: Tue Apr 07 2026 - 21:50:26 EST

On 2026/4/7 21:16, Mark Rutland wrote:
> On arm64, involuntary kernel preemption has been subtly broken since the
> move to the generic irqentry code. When preemption occurs, the new task
> may run with SError and Debug exceptions masked unexpectedly, leading to
> a loss of RAS events, breakpoints, watchpoints, and single-step
> exceptions.
>
> Prior to moving to the generic irqentry code, involuntary preemption of
> kernel mode would only occur when returning from regular interrupts, in
> a state where interrupts were masked and all other arm64-specific
> exceptions (SError, Debug, and pseudo-NMI) were unmasked. This is the
> only state in which it is valid to switch tasks.
>
> As part of moving to the generic irqentry code, the involuntary
> preemption logic was moved such that involuntary preemption could occur
> when returning from any (non-NMI) exception. As most exception handlers
> mask all arm64-specific exceptions before this point, preemption could
> occur in a state where arm64-specific exceptions were masked. This is
> not a valid state to switch tasks, and resulted in the loss of
> exceptions described above.
>
> As a temporary bodge, avoid the loss of exceptions by avoiding
> involuntary preemption when SError and/or Debug exceptions are masked.
> Practically speaking this means that involuntary preemption will only
> occur when returning from regular interrupts, as was the case before
> moving to the generic irqentry code.
>
> Fixes: 99eb057ccd67 ("arm64: entry: Move arm64_preempt_schedule_irq() into __exit_to_kernel_mode()")
> Reported-by: Ada Couprie Diaz <ada.coupriediaz@xxxxxxx>
> Reported-by: Vladimir Murzin <vladimir.murzin@xxxxxxx>
> Signed-off-by: Mark Rutland <mark.rutland@xxxxxxx>
> Cc: Andy Lutomirski <luto@xxxxxxxxxx>
> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> Cc: Jinjie Ruan <ruanjinjie@xxxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxx>
> Cc: Will Deacon <will@xxxxxxxxxx>
> ---
> arch/arm64/include/asm/entry-common.h | 21 +++++++++++++--------
> 1 file changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/include/asm/entry-common.h b/arch/arm64/include/asm/entry-common.h
> index cab8cd78f6938..20f0a7c7bde15 100644
> --- a/arch/arm64/include/asm/entry-common.h
> +++ b/arch/arm64/include/asm/entry-common.h
> @@ -29,14 +29,19 @@ static __always_inline void arch_exit_to_user_mode_work(struct pt_regs *regs,
>  
>  static inline bool arch_irqentry_exit_need_resched(void)
>  {
> -	/*
> -	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
> -	 * priority masking is used the GIC irqchip driver will clear DAIF.IF
> -	 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
> -	 * DAIF we must have handled an NMI, so skip preemption.
> -	 */
> -	if (system_uses_irq_prio_masking() && read_sysreg(daif))
> -		return false;
> +	if (system_uses_irq_prio_masking()) {
> +		/*
> +		 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
> +		 * priority masking is used the GIC irqchip driver will clear DAIF.IF
> +		 * using gic_arch_enable_irqs() for normal IRQs. If anything is set in
> +		 * DAIF we must have handled an NMI, so skip preemption.
> +		 */
> +		if (read_sysreg(daif))
> +			return false;
> +	} else {
> +		if (read_sysreg(daif) & (PSR_D_BIT | PSR_A_BIT))
> +			return false;

Reviewed-by: Jinjie Ruan <ruanjinjie@xxxxxxxxxx>

> +	}
>
>  	/*
>  	 * Preempting a task from an IRQ means we leave copies of PSTATE