Re: [PATCH 6/9] mm: oom_kill: simplify OOM killer locking
From: David Rientjes
Date: Tue Apr 28 2015 - 18:43:43 EST
On Mon, 27 Apr 2015, Johannes Weiner wrote:
> The zonelist locking and the oom_sem are two overlapping locks that
> are used to serialize global OOM killing against different things.
>
> The historical zonelist locking serializes OOM kills from allocations
> with overlapping zonelists against each other to prevent killing more
> tasks than necessary in the same memory domain. Only when neither
> tasklists nor zonelists from two concurrent OOM kills overlap (tasks
> in separate memcgs bound to separate nodes) are OOM kills allowed to
> execute in parallel.
>
> The younger oom_sem is a read-write lock to serialize OOM killing
> against the PM code trying to disable the OOM killer altogether.
>
> However, the OOM killer is a fairly cold error path, there is really
> no reason to optimize for highly performant and concurrent OOM kills.
> And the oom_sem is just flat-out redundant.
>
> Replace both locking schemes with a single global mutex serializing
> OOM kills regardless of context.
>
> Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> Acked-by: Michal Hocko <mhocko@xxxxxxx>
Acked-by: David Rientjes <rientjes@xxxxxxxxxx>
Thanks for doing this; it cleans up the code quite a bit. There's also
the added benefit of no longer interleaving oom killer messages in the
kernel log, which matters because the log is currently the only way we
can discover that the kernel has killed something.
It's not vital, and it's somewhat unrelated to your patch, but if we can't
grab the mutex with the trylock in __alloc_pages_may_oom() then I think it
would be more correct to sleep with schedule_timeout_killable() rather
than schedule_timeout_uninterruptible(). I only mention it in case you go
through another revision of the series and want to switch it at the same
time.
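For concreteness, the suggestion would look something like this in
__alloc_pages_may_oom() (kernel-style pseudocode, not the actual patch;
the oom_lock name follows the patch, the surrounding allocation logic is
elided):

```c
/* Sketch only: everything beyond the trylock/back-off is illustrative. */
static struct page *__alloc_pages_may_oom(gfp_t gfp_mask, ...)
{
	struct page *page = NULL;

	if (!mutex_trylock(&oom_lock)) {
		/*
		 * Another task is already OOM killing.  Sleeping
		 * killably rather than uninterruptibly lets this
		 * task exit promptly if it was itself the chosen
		 * victim, instead of waiting out the full timeout.
		 */
		schedule_timeout_killable(1);
		return NULL;
	}

	/* ... attempt one last allocation, then out_of_memory() ... */

	mutex_unlock(&oom_lock);
	return page;
}
```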