Re: [patch -mm] mm, oom: remove oom_lock from exit_mmap
From: Michal Hocko
Date: Mon Jul 16 2018 - 07:15:13 EST
On Mon 16-07-18 19:38:21, Tetsuo Handa wrote:
> On 2018/07/16 16:44, Michal Hocko wrote:
> >> If setting MMF_OOM_SKIP is guarded by oom_lock, we can enforce a
> >> last-second allocation attempt like below.
> >>
> >>    CPU 0                                    CPU 1
> >>
> >>    mutex_trylock(&oom_lock) in __alloc_pages_may_oom() succeeds.
> >>    get_page_from_freelist() fails.
> >>    Enters out_of_memory().
> >>
> >>                                             __oom_reap_task_mm() reclaims some memory.
> >>                                             mutex_lock(&oom_lock);
> >>
> >>    select_bad_process() does not select new victim because MMF_OOM_SKIP is not yet set.
> >>    Leaves out_of_memory().
> >>    mutex_unlock(&oom_lock) in __alloc_pages_may_oom() is called.
> >>
> >>                                             Sets MMF_OOM_SKIP.
> >>                                             mutex_unlock(&oom_lock);
> >>
> >>    get_page_from_freelist() likely succeeds before reaching __alloc_pages_may_oom() again.
> >>    Saved one OOM victim from being needlessly killed.
> >>
> >> That is, guarding the setting of MMF_OOM_SKIP with oom_lock works like synchronize_rcu():
> >> it waits for anybody who has already acquired (or started waiting for) oom_lock to release it,
> >> in order to prevent select_bad_process() from needlessly selecting a new OOM victim.
> >
> > Hmm, is this a practical problem though? Do we really need to have a
> > broader locking context just to defeat this race?
>
> Yes, because you think that select_bad_process() might take a long time. It is possible
> that MMF_OOM_SKIP gets set while the owner of oom_lock is preempted. The window in which
> select_bad_process() finds an mm which got MMF_OOM_SKIP set immediately before examining
> that mm is not so small.
I only care whether the race is practical to hit. And that is why I would
like the simplification first (i.e. drop the oom_lock in the oom_reaper
path) and then follow up with some decent justification on top.
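
For reference, the "guard MMF_OOM_SKIP with oom_lock" variant being argued
about would look roughly like the sketch below. This is illustrative only
(not the actual patch) and assumes the exit_mmap() OOM-victim path of this
era, where __oom_reap_task_mm() runs before the flag is set:

	/*
	 * Sketch of the guarded variant: while oom_lock is held here, any
	 * task already inside out_of_memory() (which also runs under
	 * oom_lock) finishes select_bad_process() before MMF_OOM_SKIP
	 * becomes visible, so it cannot give up on the current victim
	 * based on a flag that appeared in the middle of its scan.
	 */
	if (unlikely(mm_is_oom_victim(mm))) {
		mutex_lock(&oom_lock);
		__oom_reap_task_mm(mm);
		set_bit(MMF_OOM_SKIP, &mm->flags);
		mutex_unlock(&oom_lock);
	}

The simplification argued for above is the opposite: set MMF_OOM_SKIP
without taking oom_lock at all, and only reintroduce the locking if the
race turns out to matter in practice.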
--
Michal Hocko
SUSE Labs