Re: [PATCH v2 07/11] x86/irq: Move irq stacks to percpu hot section
From: Brian Gerst
Date: Wed Feb 26 2025 - 19:10:59 EST
On Wed, Feb 26, 2025 at 3:25 PM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Wed, Feb 26, 2025 at 01:05:26PM -0500, Brian Gerst wrote:
>
> > diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
> > index 474af15ae017..2cd2064457b1 100644
> > --- a/arch/x86/kernel/irq.c
> > +++ b/arch/x86/kernel/irq.c
> > @@ -34,6 +34,11 @@ EXPORT_PER_CPU_SYMBOL(irq_stat);
> > DEFINE_PER_CPU_CACHE_HOT(u16, __softirq_pending);
> > EXPORT_PER_CPU_SYMBOL(__softirq_pending);
> >
> > +DEFINE_PER_CPU_CACHE_HOT(struct irq_stack *, hardirq_stack_ptr);
> > +#ifdef CONFIG_X86_64
> > +DEFINE_PER_CPU_CACHE_HOT(bool, hardirq_stack_inuse);
> > +#endif
> > +
> > atomic_t irq_err_count;
> >
> > /*
>
> Perhaps instead of the above #ifdef,...
>
> > diff --git a/arch/x86/kernel/irq_32.c b/arch/x86/kernel/irq_32.c
> > index dc1049c01f9b..48a27cde9635 100644
> > --- a/arch/x86/kernel/irq_32.c
> > +++ b/arch/x86/kernel/irq_32.c
> > @@ -52,6 +52,8 @@ static inline int check_stack_overflow(void) { return 0; }
> > static inline void print_stack_overflow(void) { }
> > #endif
> >
> > +DEFINE_PER_CPU_CACHE_HOT(struct irq_stack *, softirq_stack_ptr);
> > +
> > static void call_on_stack(void *func, void *stack)
> > {
> > asm volatile("xchgl %%ebx,%%esp \n"
>
> > diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
> > index 56bdeecd8ee0..4834e317e568 100644
> > --- a/arch/x86/kernel/irq_64.c
> > +++ b/arch/x86/kernel/irq_64.c
>
> stick it in this file, like you already did for the 32bit case?
I had it that way originally, but it wasn't packing efficiently
before I added SORT_BY_ALIGNMENT() to the linker script. I'll change
it back.
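
Roughly, that would look like this (sketch, not the final patch):

```c
/*
 * Sketch of the suggested change: drop the #ifdef CONFIG_X86_64 block
 * from the common arch/x86/kernel/irq.c and define the 64-bit-only
 * variable next to its users, mirroring how irq_32.c already defines
 * softirq_stack_ptr.
 */

/* arch/x86/kernel/irq.c -- common code, no #ifdef needed: */
DEFINE_PER_CPU_CACHE_HOT(struct irq_stack *, hardirq_stack_ptr);

/* arch/x86/kernel/irq_64.c -- only built when CONFIG_X86_64=y: */
DEFINE_PER_CPU_CACHE_HOT(bool, hardirq_stack_inuse);
```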
Brian Gerst