Re: [patch 00/12] mm: page_alloc: improve OOM mechanism and policy

From: Tetsuo Handa
Date: Sat Apr 11 2015 - 03:30:11 EST

Johannes Weiner wrote:
> The argument here was always that NOFS allocations are very limited in
> their reclaim powers and will trigger OOM prematurely. However, the
> way we limit dirty memory these days forces most cache to be clean at
> all times, and direct reclaim in general hasn't been allowed to issue
> page writeback for quite some time. So these days, NOFS reclaim isn't
> really weaker than regular direct reclaim. The only exception is that
> it might block writeback, so we'd go OOM if the only reclaimables left
> were dirty pages against that filesystem. That should be acceptable.
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 47981c5e54c3..fe3cb2b0b85b 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2367,16 +2367,6 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order, int alloc_flags,
> /* The OOM killer does not needlessly kill tasks for lowmem */
> if (ac->high_zoneidx < ZONE_NORMAL)
> goto out;
> - /* The OOM killer does not compensate for IO-less reclaim */
> - if (!(gfp_mask & __GFP_FS)) {
> - /*
> - * XXX: Page reclaim didn't yield anything,
> - * and the OOM killer can't be invoked, but
> - * keep looping as per tradition.
> - */
> - *did_some_progress = 1;
> - goto out;
> - }
> if (pm_suspended_storage())
> goto out;
> /* The OOM killer may not free memory on a specific node */
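
Just to restate what the hunk above changes for a !__GFP_FS request, here is a
throwaway userspace model (not kernel code; SKETCH_GFP_FS and the helper names
are made up, only the control flow mirrors the quoted check in
__alloc_pages_may_oom()):

#include <stdbool.h>
#include <stdio.h>

#define SKETCH_GFP_FS 0x1u	/* stand-in for __GFP_FS */

/* Old behaviour: a !__GFP_FS request fakes progress so the caller keeps looping. */
static bool may_invoke_oom_old(unsigned int gfp_mask, bool *did_some_progress)
{
	if (!(gfp_mask & SKETCH_GFP_FS)) {
		*did_some_progress = true;	/* "keep looping as per tradition" */
		return false;			/* OOM killer not invoked */
	}
	return true;
}

/* New behaviour: the !__GFP_FS special case is gone. */
static bool may_invoke_oom_new(unsigned int gfp_mask, bool *did_some_progress)
{
	(void)gfp_mask;
	*did_some_progress = false;
	return true;				/* even a GFP_NOFS-style request may now go OOM */
}

int main(void)
{
	bool progress = false;
	bool oom;

	oom = may_invoke_oom_old(0, &progress);	/* 0: no __GFP_FS set */
	printf("old: invoke_oom=%d fake_progress=%d\n", oom, progress);

	progress = false;
	oom = may_invoke_oom_new(0, &progress);
	printf("new: invoke_oom=%d fake_progress=%d\n", oom, progress);
	return 0;
}

In short, before the patch such a request pretended to have made progress and
retried forever; after it, the OOM killer may actually be invoked.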

I think this change will allow calling out_of_memory() for !__GFP_FS
allocations, which results in the "oom_kill_process() is trivially called via
pagefault_out_of_memory()" problem described in .
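
For reference, the path I am worried about is roughly the following
(simplified; on x86 I believe this goes through mm_fault_error(), other
architectures differ):

  handle_mm_fault() returns VM_FAULT_OOM
    -> arch page fault handler
       -> pagefault_out_of_memory()
          -> out_of_memory()
             -> oom_kill_process()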

I myself think that we should trigger the OOM killer for !__GFP_FS allocations
in order to make forward progress in case the OOM victim is blocked (see the
example at the end of this mail). So, my question about this change is whether
we can accept invoking the OOM killer from the page fault path, no matter how
trivially the OOM killer will end up killing some process.
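
To spell out the kind of situation I have in mind (a hypothetical example, not
an observed trace; the lock involved could be any lock the victim needs in
order to exit):

  - Task A has been chosen as the OOM victim, but it is blocked waiting for a
    lock (a filesystem lock, say) and cannot exit to free its memory.
  - Task B holds that lock and is looping in the page allocator on a GFP_NOFS
    allocation; with the !__GFP_FS check in place it never invokes the OOM
    killer, so it waits for task A's memory forever.
  - Nothing makes progress until the OOM killer can be invoked again and
    eventually pick a different victim.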