Re: [PATCH 0/2 v8] oom: capture unreclaimable slab info in oom message

From: Tetsuo Handa
Date: Thu Sep 28 2017 - 16:45:37 EST


Yang Shi wrote:
> On 9/28/17 12:57 PM, Tetsuo Handa wrote:
> > Yang Shi wrote:
> >> On 9/27/17 9:36 PM, Tetsuo Handa wrote:
> >>> On 2017/09/28 6:46, Yang Shi wrote:
> >>>> Changelog v7 -> v8:
> >>>> * Adopted Michal's suggestion to dump unreclaimable slab info when the amount of unreclaimable slabs > total user memory, not only in the oom panic path.
> >>>
> >>> Holding slab_mutex inside dump_unreclaimable_slab() has been avoided since V2
> >>> because there are
> >>>
> >>> mutex_lock(&slab_mutex);
> >>> kmalloc(GFP_KERNEL);
> >>> mutex_unlock(&slab_mutex);
> >>>
> >>> users. If we call dump_unreclaimable_slab() on the non-panic OOM path, aren't we
> >>> introducing a risk of a crash (i.e. kernel panic) on the regular OOM path?
> >>
> >> I don't see the difference between the regular oom path and the oom panic path
> >> other than calling panic() at the end.
> >>
> >> And the slab dump may be called by the panic path too; it is for both the
> >> regular and the panic paths.
> >
> > Calling a function that might cause a kernel oops immediately before calling panic()
> > would be tolerable, for the kernel will panic after all. But calling a function
> > that might cause a kernel oops when there is no plan to call panic() is a bug.
>
> I got your point. slab_mutex is used to protect the list of all the
> slabs; since we are already in oom, there should be no kmem cache
> destroy happening during the list traversal. And, list_for_each_entry() has
> been replaced with list_for_each_entry_safe() to make the traversal more
> robust.

I consider that an OOM event and a kmem cache destroy event can run concurrently,
because slab_mutex is not held by the OOM event (and unfortunately cannot be held,
due to the possibility of deadlock) in order to protect the list of all slabs.
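To make the race concrete, here is a minimal sketch of the problematic traversal
(not the actual dump_unreclaimable_slab() code; the printed fields are illustrative),
walking slab_caches without slab_mutex while kmem_cache_destroy() may run on
another CPU:

  struct kmem_cache *s;

  /* OOM path: slab_mutex deliberately not taken, to avoid deadlock. */
  list_for_each_entry(s, &slab_caches, list) {
          /*
           * If kmem_cache_destroy() on another CPU unlinks and frees "s"
           * right here, both this dereference and the step to
           * s->list.next are use-after-free, and list_for_each_entry_safe()
           * does not help because it only caches the next pointer.
           */
          pr_info("%-32s ...\n", s->name);
  }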

I don't think replacing list_for_each_entry() with list_for_each_entry_safe()
makes the traversal more robust, for list_for_each_entry_safe() does not defer
freeing of memory used by the list element. Rather, replacing list_for_each_entry()
with list_for_each_entry_rcu() (and making relevant changes such as
rcu_read_lock()/rcu_read_unlock()/synchronize_rcu()) will make the traversal safe.
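
For reference, a minimal sketch of what the RCU-based traversal would look like;
this also assumes kmem_cache_destroy() is changed to unlink the cache with
list_del_rcu() and to wait for readers before freeing it (illustrative only,
not existing code):

  /* Dump (reader) side: */
  struct kmem_cache *s;

  rcu_read_lock();
  list_for_each_entry_rcu(s, &slab_caches, list)
          pr_info("%-32s ...\n", s->name);
  rcu_read_unlock();

  /* kmem_cache_destroy() (writer) side would then need roughly: */
  mutex_lock(&slab_mutex);
  list_del_rcu(&s->list);
  mutex_unlock(&slab_mutex);
  synchronize_rcu();      /* wait for in-flight readers before freeing "s" */
  /* ... release the cache ... */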