Re: [PATCH 1/9] slub: Make PREEMPT_RT support less convoluted

From: Sebastian Andrzej Siewior
Date: Wed Aug 24 2022 - 09:26:16 EST


On 2022-08-23 19:15:43 [+0200], Vlastimil Babka wrote:
> > +#define slub_local_irq_save(flags) local_irq_save(flags)
> > +#define slub_local_irq_restore(flags) local_irq_restore(flags)
>
> Note these won't be necessary anymore after
> https://lore.kernel.org/linux-mm/20220823170400.26546-6-vbabka@xxxxxxx/T/#u

Okay, let me postpone that one and rebase what is left on top.
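
For reference, the shape I have in mind for these helpers is roughly the
following (just a sketch; the RT side shown here is my assumption of how
it would end up, not copied verbatim from the patch):

#ifndef CONFIG_PREEMPT_RT
/* !RT: the lockless fast path runs with interrupts disabled. */
#define use_lockless_fast_path()	(true)
#define slub_local_irq_save(flags)	local_irq_save(flags)
#define slub_local_irq_restore(flags)	local_irq_restore(flags)
#else
/*
 * RT: the local_lock serializes instead, so the irq fiddling becomes a
 * no-op (assumed shape, see above).
 */
#define use_lockless_fast_path()	(false)
#define slub_local_irq_save(flags)	do { } while (0)
#define slub_local_irq_restore(flags)	do { } while (0)
#endif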

> > @@ -482,7 +488,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab
> > void *freelist_new, unsigned long counters_new,
> > const char *n)
> > {
> > - if (!IS_ENABLED(CONFIG_PREEMPT_RT))
> > + if (use_lockless_fast_path())
> > lockdep_assert_irqs_disabled();
>
> This test would stay after the patch I referenced above. But while this
> change will keep testing the technically correct thing, the name is IMHO
> misleading here: semantically this is not about the lockless fast path,
> but about whether irqs must be disabled to avoid a deadlock when an irq
> arrives while we hold the bit_spin_lock() and its handler tries to
> acquire it as well.

Color me confused. Memory is never allocated in-IRQ context on
PREEMPT_RT. Therefore I don't understand why interrupts must be disabled
for the fast path (unless that comment only applied to !RT).

It could be about preemption, since on PREEMPT_RT spinlock_t and
local_lock don't disable preemption, so another allocation on the same
CPU is possible. But then you say "we hold the bit_spin_lock()" and this
one disables preemption. That means nothing can stop the bit_spin_lock()
owner from making progress, and since there is no memory allocation
in-IRQ, we can't block on the same bit_spin_lock() on the same CPU.
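
IOW the only deadlock I can construct out of that description is on !RT
(hypothetical call chain for illustration; only __cmpxchg_double_slab(),
__slab_free() and slab_lock() are the real mm/slub.c names):

  __cmpxchg_double_slab()
    slab_lock(slab)            <- bit_spin_lock() on the slab
      <interrupt>              <- only possible with irqs enabled
        kfree() from the irq handler
          __slab_free()
            slab_lock(slab)    <- same slab, spins forever

which, I assume, is what the lockdep_assert_irqs_disabled() is there to
catch on !RT. On PREEMPT_RT nothing allocates or frees in hard-irq
context and bit_spin_lock() keeps preemption disabled, so the owner
always runs to completion and this chain can't happen.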

Sebastian