Re: [RFC PATCH 2/2] memcg: do not report racy no-eligible OOM tasks

From: Michal Hocko
Date: Mon Oct 22 2018 - 09:43:21 EST


On Mon 22-10-18 22:20:36, Tetsuo Handa wrote:
> On 2018/10/22 21:03, Michal Hocko wrote:
> > On Mon 22-10-18 20:45:17, Tetsuo Handa wrote:
> >> On 2018/10/22 16:13, Michal Hocko wrote:
> >>> From: Michal Hocko <mhocko@xxxxxxxx>
> >>>
> >>> Tetsuo has reported [1] that a single process group memcg might easily
> >>> swamp the log with no-eligible oom victim reports due to a race between
> >>> the memcg charge and the oom_reaper:
> >>>
> >>> Thread 1                 Thread2                           oom_reaper
> >>> try_charge               try_charge
> >>>                            mem_cgroup_out_of_memory
> >>>                              mutex_lock(oom_lock)
> >>>   mem_cgroup_out_of_memory
> >>>     mutex_lock(oom_lock)
> >>>                                out_of_memory
> >>>                                  select_bad_process
> >>>                                    oom_kill_process(current)
> >>>                                    wake_oom_reaper
> >>>                                                              oom_reap_task
> >>>                                                              MMF_OOM_SKIP->victim
> >>>                              mutex_unlock(oom_lock)
> >>>     out_of_memory
> >>>       select_bad_process # no task
> >>>
> >>> If Thread1 hadn't raced it would have bailed out from try_charge and
> >>> forced the charge. We can achieve the same by checking tsk_is_oom_victim
> >>> inside the oom_lock and thereby close the race.
> >>>
> >>> [1] http://lkml.kernel.org/r/bb2074c0-34fe-8c2c-1c7d-db71338f1e7f@xxxxxxxxxxxxxxxxxxx
> >>> Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
> >>> ---
> >>> mm/memcontrol.c | 14 +++++++++++++-
> >>> 1 file changed, 13 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> >>> index e79cb59552d9..a9dfed29967b 100644
> >>> --- a/mm/memcontrol.c
> >>> +++ b/mm/memcontrol.c
> >>> @@ -1380,10 +1380,22 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> >>>                  .gfp_mask = gfp_mask,
> >>>                  .order = order,
> >>>          };
> >>> -        bool ret;
> >>> +        bool ret = true;
> >>>
> >>>          mutex_lock(&oom_lock);
> >>> +
> >>> +        /*
> >>> +         * multi-threaded tasks might race with the oom_reaper and gain
> >>> +         * MMF_OOM_SKIP before reaching out_of_memory, which can lead to
> >>> +         * an out_of_memory failure if the task is the last one in the
> >>> +         * memcg, which would be a false positive failure report
> >>> +         */
> >>> +        if (tsk_is_oom_victim(current))
> >>> +                goto unlock;
> >>> +
> >>>
> >> This is not wrong, but it is strange. We can use mutex_lock_killable(&oom_lock)
> >> so that any killed threads no longer wait for the oom_lock.
> >
> > tsk_is_oom_victim is stronger because it doesn't depend on
> > fatal_signal_pending, which might be cleared during the exit process.
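
To spell the difference out, here is a hypothetical variant based on the
mutex_lock_killable() idea, for comparison only (it is not taken from either
patch): it can only bail out while the waiter still has a fatal signal
pending, whereas tsk_is_oom_victim() is backed by signal->oom_mm, which keeps
identifying the victim throughout exit.

static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
                                     int order)
{
        struct oom_control oc = {
                .zonelist = NULL,
                .memcg = memcg,
                .gfp_mask = gfp_mask,
                .order = order,
        };
        bool ret;

        /*
         * Bails out only when the lock waiter is interrupted by a pending
         * fatal signal; a task already past that point in the exit path
         * would still enter out_of_memory() and could produce the very
         * no-eligible-task report we are trying to avoid.
         */
        if (mutex_lock_killable(&oom_lock))
                return true;

        ret = out_of_memory(&oc);
        mutex_unlock(&oom_lock);
        return ret;
}
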
>
> I mean:
>
> mm/memcontrol.c | 3 +-
> mm/oom_kill.c | 111 +++++---------------------------------------------------
> 2 files changed, 12 insertions(+), 102 deletions(-)

This is a much larger change than I feel comfortable with for plugging this
specific issue. A simple and easy to understand fix which doesn't add
maintenance burden should be preferred in general.

The code reduction looks attractive, but considering it is based on
removing one of the heuristics that prevent OOM reports in some cases, it
should be done on its own with a careful and thorough justification.
E.g. how often is the heuristic really helpful?

In principle I do not oppose removing the shortcut once all due diligence
is done, because this particular one has given us quite a lot of headaches
in the past.
--
Michal Hocko
SUSE Labs