Re: [PATCH] mm: avoid livelock on !__GFP_FS allocations
From: Pekka Enberg
Date: Tue Oct 25 2011 - 04:08:18 EST
Hi Colin,
On Tue, Oct 25, 2011 at 9:39 AM, Colin Cross <ccross@xxxxxxxxxxx> wrote:
>>> Under the following conditions, __alloc_pages_slowpath can loop
>>> forever:
>>> gfp_mask & __GFP_WAIT is true
>>> gfp_mask & __GFP_FS is false
>>> reclaim and compaction make no progress
>>> order <= PAGE_ALLOC_COSTLY_ORDER
>>>
>>> These conditions happen very often during suspend and resume,
>>> when pm_restrict_gfp_mask() effectively converts all GFP_KERNEL
>>> allocations into __GFP_WAIT.
On Tue, Oct 25, 2011 at 12:40 AM, Pekka Enberg <penberg@xxxxxxxxxxxxxx> wrote:
>> Why does it do that? Why don't we fix the gfp mask instead?
On Tue, Oct 25, 2011 at 10:51 AM, Colin Cross <ccross@xxxxxxxxxxx> wrote:
> It disables __GFP_IO and __GFP_FS because the IO drivers may be suspended.
Sure, but why doesn't it clear __GFP_WAIT too?
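For reference, the 3.1-era pm_restrict_gfp_mask() looks roughly like
this (quoting from memory, so treat it as a sketch rather than the
exact source): it saves gfp_allowed_mask and clears only the IO/FS
bits, which is how GFP_KERNEL degenerates to plain __GFP_WAIT:

	void pm_restrict_gfp_mask(void)
	{
		WARN_ON(!mutex_is_locked(&pm_mutex));
		WARN_ON(saved_gfp_mask);
		saved_gfp_mask = gfp_allowed_mask;
		/* GFP_IOFS is __GFP_IO | __GFP_FS; __GFP_WAIT stays set */
		gfp_allowed_mask &= ~GFP_IOFS;
	}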
>>> The oom killer is not run because gfp_mask & __GFP_FS is false,
>>> but should_alloc_retry will always return true when order is at
>>> most PAGE_ALLOC_COSTLY_ORDER.
>>>
>>> Fix __alloc_pages_slowpath to skip retrying when oom killer is
>>> not allowed by the GFP flags, the same way it would skip if the
>>> oom killer was allowed but disabled.
>>>
>>> Signed-off-by: Colin Cross <ccross@xxxxxxxxxxx>
>>> ---
>>>
>>> An alternative patch would add a did_some_progress argument to
>>> __alloc_pages_may_oom, and remove the checks in
>>> __alloc_pages_slowpath that require knowledge of when
>>> __alloc_pages_may_oom chooses to run out_of_memory. If
>>> did_some_progress was still zero, it would goto nopage whether
>>> or not __alloc_pages_may_oom was actually called.
>>>
>>> mm/page_alloc.c | 4 ++++
>>> 1 files changed, 4 insertions(+), 0 deletions(-)
>>>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index fef8dc3..dcd99b3 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -2193,6 +2193,10 @@ rebalance:
>>>  			}
>>>
>>>  			goto restart;
>>> +		} else {
>>> +			/* If we aren't going to try the OOM killer, give up */
>>> +			if (!(gfp_mask & __GFP_NOFAIL))
>>> +				goto nopage;
>>>  		}
>>>  	}
>>
>> I don't quite understand how __GFP_WAIT is involved here. Which path
>> is causing the infinite loop?
>
> GFP_KERNEL is __GFP_WAIT | __GFP_IO | __GFP_FS. Once driver suspend
> has started, gfp_allowed_mask is ~(__GFP_IO | __GFP_FS), so any call to
> __alloc_pages_nodemask(GFP_KERNEL, ...) gets masked to effectively
> __alloc_pages_nodemask(__GFP_WAIT, ...).
>
> The loop is in __alloc_pages_slowpath, from the rebalance label to
> should_alloc_retry. Under the conditions I listed in the commit
> message, there is no path to the nopage label, because all the
> relevant "goto nopage" lines that would normally allow a GFP_KERNEL
> allocation to fail are inside a check for __GFP_FS.
Right. Please include that information in the changelog.
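To spell the loop out (simplified from mm/page_alloc.c, so the details
are approximate):

	rebalance:
		page = __alloc_pages_direct_reclaim(..., &did_some_progress);
		if (page)
			goto got_pg;

		if (!did_some_progress) {
			if ((gfp_mask & __GFP_FS) && !(gfp_mask & __GFP_NORETRY)) {
				/*
				 * The OOM killer and every "goto nopage" exit
				 * for failed reclaim live in here, so a
				 * !__GFP_FS allocation never reaches them.
				 */
				...
				goto restart;
			}
		}

		/* Unconditionally retries for order <= PAGE_ALLOC_COSTLY_ORDER */
		if (should_alloc_retry(gfp_mask, order, pages_reclaimed)) {
			wait_iff_congested(preferred_zone, BLK_RW_ASYNC, HZ/50);
			goto rebalance;
		}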
> Modifying the gfp_allowed_mask would not completely fix the issue: a
> GFP_NOIO allocation can meet the same conditions outside of suspend.
> gfp_allowed_mask just makes the issue more likely by converting
> GFP_KERNEL into GFP_NOIO.
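Indeed, and should_alloc_retry() refuses to give up on low-order
allocations no matter what (again roughly the current code, modulo my
memory):

	static inline int
	should_alloc_retry(gfp_t gfp_mask, unsigned int order,
			   unsigned long pages_reclaimed)
	{
		/* Do not loop if specifically requested */
		if (gfp_mask & __GFP_NORETRY)
			return 0;

		/*
		 * Low-order allocations are treated as "too small to
		 * fail": always retry them.
		 */
		if (order <= PAGE_ALLOC_COSTLY_ORDER)
			return 1;

		/*
		 * Larger orders retry only with __GFP_REPEAT, and stop
		 * once at least 1 << order pages have been reclaimed.
		 */
		if (gfp_mask & __GFP_REPEAT && pages_reclaimed < (1 << order))
			return 1;

		return 0;
	}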
Why would anyone want to combine __GFP_WAIT, __GFP_NOFAIL and
!__GFP_IO on purpose? What is it useful for?
As for your patch:
Acked-by: Pekka Enberg <penberg@xxxxxxxxxx>
but I'd love to hear why we shouldn't also fix the suspend gfp mask to
clear __GFP_WAIT and add a WARN_ON_ONCE to your new code path.
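Something like this is what I have in mind for the WARN_ON_ONCE
(untested, on top of your patch):

	} else {
		/* If we aren't going to try the OOM killer, give up */
		if (!(gfp_mask & __GFP_NOFAIL)) {
			WARN_ON_ONCE(1);
			goto nopage;
		}
	}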
Pekka