Re: [PATCH RFC] mm: slub: fix flush_cpu_slab()/__free_slab() invocations in task context.
From: Sebastian Andrzej Siewior
Date: Tue Aug 30 2022 - 06:24:42 EST
On 2022-08-29 17:48:05 [+0200], Maurizio Lombardi wrote:
> diff --git a/mm/slub.c b/mm/slub.c
> index 862dbd9af4f5..d46ee90651d2 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2681,30 +2681,34 @@ struct slub_flush_work {
>  	bool skip;
> };
>
> +static void flush_cpu_slab(void *d)
> +{
> +	struct kmem_cache *s = d;
> +	struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
> +
> +	if (c->slab)
> +		flush_slab(s, c);
> +
> +	unfreeze_partials(s);
> +}
…
> @@ -2721,13 +2725,18 @@ static void flush_all_cpus_locked(struct kmem_cache *s)
>  	lockdep_assert_cpus_held();
>  	mutex_lock(&flush_lock);
>
> +	if (in_task()) {
> +		on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1);
This blocks with preemption disabled until flush_cpu_slab() has completed
on all CPUs. That function acquires a local_lock_t, which cannot be
acquired from in-IRQ context (on PREEMPT_RT it is a sleeping spinlock_t),
and in-IRQ context is exactly where on_each_cpu_cond() invokes
flush_cpu_slab() on the remote CPUs.
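To make that concrete, here is a stand-alone sketch of the same pattern
(illustrative only, nothing SLUB-specific, all names made up): on
PREEMPT_RT the local_lock_t is a per-CPU spinlock_t, i.e. a sleeping
lock, so taking it from the IPI handler on the remote CPUs is invalid
context.

#include <linux/init.h>
#include <linux/local_lock.h>
#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/smp.h>

struct demo_pcpu {
	local_lock_t lock;
	unsigned long count;
};

/* Usual DEFINE_PER_CPU + INIT_LOCAL_LOCK pattern. */
static DEFINE_PER_CPU(struct demo_pcpu, demo_pcpu) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static bool demo_cond(int cpu, void *info)
{
	return true;
}

static void demo_func(void *info)
{
	unsigned long flags;

	/*
	 * Runs in hard-IRQ context on the remote CPUs.  On PREEMPT_RT the
	 * local_lock_irqsave() below takes a sleeping lock and triggers a
	 * "sleeping function called from invalid context" splat.
	 */
	local_lock_irqsave(&demo_pcpu.lock, flags);
	this_cpu_inc(demo_pcpu.count);
	local_unlock_irqrestore(&demo_pcpu.lock, flags);
}

static int __init demo_init(void)
{
	/* wait == 1: the caller also spins with preemption disabled. */
	on_each_cpu_cond(demo_cond, demo_func, NULL, 1);
	return 0;
}
module_init(demo_init);
MODULE_LICENSE("GPL");

On !RT this goes unnoticed because local_lock_irqsave() is just
local_irq_save() there, which is fine in hard-IRQ context.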
Couldn't we instead use a workqueue with the WQ_MEM_RECLAIM bit set? It
may reclaim memory, after all ;)
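
Roughly what I have in mind, as a completely untested sketch inside
mm/slub.c: keep the per-CPU slub_flush_work items but put them on a
dedicated workqueue with a rescuer. The names slub_flushwq,
slub_flushwq_init() and flush_cpu_slab_work() are made up, the init would
have to be hooked up somewhere early (kmem_cache_init_late() or similar),
and it assumes struct slub_flush_work keeps its work/s/skip members plus
a per-CPU instance (slub_flush below).

static DEFINE_PER_CPU(struct slub_flush_work, slub_flush);
static struct workqueue_struct *slub_flushwq;

static int __init slub_flushwq_init(void)
{
	/*
	 * WQ_MEM_RECLAIM gives the queue a rescuer thread, so the flush
	 * still makes progress when it is issued because of memory
	 * pressure.
	 */
	slub_flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM, 0);
	return slub_flushwq ? 0 : -ENOMEM;
}

static void flush_cpu_slab_work(struct work_struct *w)
{
	struct slub_flush_work *sfw = container_of(w, struct slub_flush_work, work);
	struct kmem_cache *s = sfw->s;
	struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);

	/*
	 * Runs in preemptible task context on the right CPU, so taking
	 * the local_lock_t in flush_slab()/unfreeze_partials() is fine.
	 */
	if (c->slab)
		flush_slab(s, c);

	unfreeze_partials(s);
}

static void flush_all_cpus_locked(struct kmem_cache *s)
{
	unsigned int cpu;

	lockdep_assert_cpus_held();
	mutex_lock(&flush_lock);

	for_each_online_cpu(cpu) {
		struct slub_flush_work *sfw = &per_cpu(slub_flush, cpu);

		if (!has_cpu_slab(cpu, s)) {
			sfw->skip = true;
			continue;
		}
		INIT_WORK(&sfw->work, flush_cpu_slab_work);
		sfw->skip = false;
		sfw->s = s;
		queue_work_on(cpu, slub_flushwq, &sfw->work);
	}

	for_each_online_cpu(cpu) {
		struct slub_flush_work *sfw = &per_cpu(slub_flush, cpu);

		if (!sfw->skip)
			flush_work(&sfw->work);
	}

	mutex_unlock(&flush_lock);
}

That way the flushing always happens in task context on the intended CPU,
and the rescuer keeps it from getting stuck behind reclaim when the flush
itself is triggered by memory pressure.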
Sebastian