Re: [PATCH 1/9] slub: Make PREEMPT_RT support less convoluted
From: Vlastimil Babka
Date: Tue Aug 23 2022 - 14:52:12 EST
On 8/17/22 18:26, Sebastian Andrzej Siewior wrote:
> From: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>
> The slub code already has a few helpers depending on PREEMPT_RT. Add a few
> more and get rid of the CONFIG_PREEMPT_RT conditionals all over the place.
>
> No functional change.
>
> Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Christoph Lameter <cl@xxxxxxxxx>
> Cc: David Rientjes <rientjes@xxxxxxxxxx>
> Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> Cc: Pekka Enberg <penberg@xxxxxxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Cc: linux-mm@xxxxxxxxx
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> ---
> mm/slub.c | 66 +++++++++++++++++++++++++------------------------------
> 1 file changed, 30 insertions(+), 36 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 862dbd9af4f52..5f7c5b5bd49f9 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -100,9 +100,11 @@
> * except the stat counters. This is a percpu structure manipulated only by
> * the local cpu, so the lock protects against being preempted or interrupted
> * by an irq. Fast path operations rely on lockless operations instead.
> - * On PREEMPT_RT, the local lock does not actually disable irqs (and thus
> - * prevent the lockless operations), so fastpath operations also need to take
> - * the lock and are no longer lockless.
> + *
> + * On PREEMPT_RT, the local lock neither disables interrupts nor preemption
> + * which means the lockless fastpath cannot be used as it might interfere with
> + * in-progress slow path operations. In this case the local lock is always
> + * taken but it still utilizes the freelist for the common operations.
> *
> * lockless fastpaths
> *
> @@ -163,8 +165,11 @@
> * function call even on !PREEMPT_RT, use inline preempt_disable() there.
> */
> #ifndef CONFIG_PREEMPT_RT
> -#define slub_get_cpu_ptr(var) get_cpu_ptr(var)
> -#define slub_put_cpu_ptr(var) put_cpu_ptr(var)
> +#define slub_get_cpu_ptr(var) get_cpu_ptr(var)
> +#define slub_put_cpu_ptr(var) put_cpu_ptr(var)
> +#define use_lockless_fast_path() (true)
> +#define slub_local_irq_save(flags) do { } while (0)
> +#define slub_local_irq_restore(flags) do { } while (0)
Note these won't be necessary anymore after
https://lore.kernel.org/linux-mm/20220823170400.26546-6-vbabka@xxxxxxx/T/#u
> #else
> #define slub_get_cpu_ptr(var) \
> ({ \
> @@ -176,6 +181,9 @@ do { \
> (void)(var); \
> migrate_enable(); \
> } while (0)
> +#define use_lockless_fast_path() (false)
> +#define slub_local_irq_save(flags) local_irq_save(flags)
> +#define slub_local_irq_restore(flags) local_irq_restore(flags)
> #endif
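Just to restate the intent as I read it (a rough sketch of the usage
pattern, not a hunk from this patch): callers can then replace the
CONFIG_PREEMPT_RT #ifdefs with a plain branch, roughly

	if (use_lockless_fast_path()) {
		/* !RT: lockless this_cpu_cmpxchg_double() on c->freelist */
	} else {
		/*
		 * RT: take local_lock(&s->cpu_slab->lock) and update the
		 * freelist under it
		 */
	}

and since the helper expands to a compile-time constant, the dead branch
is eliminated.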
>
> #ifdef CONFIG_SLUB_DEBUG
> @@ -460,16 +468,14 @@ static __always_inline void __slab_unlock(struct slab *slab)
>
> static __always_inline void slab_lock(struct slab *slab, unsigned long *flags)
> {
> - if (IS_ENABLED(CONFIG_PREEMPT_RT))
> - local_irq_save(*flags);
> + slub_local_irq_save(*flags);
> __slab_lock(slab);
> }
>
> static __always_inline void slab_unlock(struct slab *slab, unsigned long *flags)
> {
> __slab_unlock(slab);
> - if (IS_ENABLED(CONFIG_PREEMPT_RT))
> - local_irq_restore(*flags);
> + slub_local_irq_restore(*flags);
> }
Ditto.
> /*
> @@ -482,7 +488,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab
> void *freelist_new, unsigned long counters_new,
> const char *n)
> {
> - if (!IS_ENABLED(CONFIG_PREEMPT_RT))
> + if (use_lockless_fast_path())
> lockdep_assert_irqs_disabled();
This test would stay after the patch I referenced above. But while the
change keeps testing the technically correct thing, the name is IMHO
misleading here: semantically this is not about the lockless fast path,
but about whether we need irqs disabled to avoid a deadlock, where an
irq arriving while we hold the bit_spin_lock() runs a handler that
tries to acquire it as well.
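For illustration only, with a purely hypothetical name (not a concrete
naming suggestion), something like this would match the semantics better:

	/*
	 * Hypothetical helper: on !PREEMPT_RT the bit_spin_lock() taken in
	 * __slab_lock() relies on the caller having irqs disabled, otherwise
	 * an irq handler freeing to the same slab could spin on the lock we
	 * hold and deadlock.
	 */
	#define slab_lock_needs_irqs_disabled()	(!IS_ENABLED(CONFIG_PREEMPT_RT))

		if (slab_lock_needs_irqs_disabled())
			lockdep_assert_irqs_disabled();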