Re: [PATCH 6/9] mm: oom_kill: simplify OOM killer locking

From: Tetsuo Handa
Date: Wed Apr 29 2015 - 01:48:28 EST


David Rientjes wrote:
> It's not vital and somewhat unrelated to your patch, but if we can't grab
> the mutex with the trylock in __alloc_pages_may_oom() then I think it
> would be more correct to do schedule_timeout_killable() rather than
> uninterruptible. I just mention it if you happen to go through another
> revision of the series and want to switch it at the same time.
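
For reference, the hunk being discussed is roughly this (from __alloc_pages_may_oom()
after this patch; reproduced from memory, details may differ):

	/*
	 * Acquire the oom lock.  If that fails, somebody else is
	 * making progress for us.
	 */
	if (!mutex_trylock(&oom_lock)) {
		*did_some_progress = 1;
		schedule_timeout_uninterruptible(1);
		/* the suggestion: schedule_timeout_killable(1) instead */
		return NULL;
	}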

It is a difficult choice. A killable sleep is a good thing if

(1) the OOM victim is current thread
(2) the OOM victim is waiting for current thread to release lock

but a bad thing otherwise. Currently (2) cannot happen, because the current
thread cannot access the memory reserves while it is blocking the OOM victim.
If fatal_signal_pending() threads could access a portion of the memory
reserves (as I said

I don't like allowing only TIF_MEMDIE threads to get reserve access, for it
may be a !TIF_MEMDIE thread which really needs memory to safely terminate
without failing allocations from do_exit(). Rather, why not discontinue the
TIF_MEMDIE handling and allow access to the private memory reserves for all
fatal_signal_pending() threads (i.e. replace WMARK_OOM with WMARK_KILLED
in "[patch 09/12] mm: page_alloc: private memory reserves for OOM-killing
allocations")?

at https://lkml.org/lkml/2015/3/27/378), then (2) would become true.
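
The check could look something like this in gfp_to_alloc_flags() (a sketch
only; ALLOC_KILLED and WMARK_KILLED are names I'm assuming for the proposed
reserve, not existing code):

	/*
	 * Hypothetical sketch: any thread with a fatal signal pending may
	 * allocate down to WMARK_KILLED, instead of only the TIF_MEMDIE
	 * thread bypassing the watermarks entirely.
	 */
	if (likely(!(gfp_mask & __GFP_NOMEMALLOC)) && !in_interrupt() &&
	    fatal_signal_pending(current))
		alloc_flags |= ALLOC_KILLED;	/* checked against WMARK_KILLED */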

Of course, the threads which the OOM victim is waiting for may not have
SIGKILL pending. WMARK_KILLED would help when the lock contention occurs
among threads sharing the same mm struct, but would not help otherwise.

Well, what about introducing WMARK_OOM as a memory reserve which can be
accessed while atomic_read(&oom_victims) > 0? That way, we could choose
the next OOM victim upon reaching WMARK_OOM.
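
Roughly like this (again a sketch; ALLOC_OOM is an assumed name, and
oom_victims would need to be visible to the page allocator):

	/*
	 * Hypothetical sketch: while any OOM victim exists, every
	 * allocating thread may dip down to WMARK_OOM, so that lock
	 * holders the victim is waiting for can make progress.  If
	 * WMARK_OOM is reached nonetheless, the victim is likely stuck
	 * and we select the next victim.
	 */
	if (atomic_read(&oom_victims) > 0)
		alloc_flags |= ALLOC_OOM;	/* checked against WMARK_OOM */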