Re: [RFC PATCH 3/3] hugetlbfs: don't retry when pool page allocations start to fail

From: Mike Kravetz
Date: Thu Jul 25 2019 - 17:11:42 EST


On 7/25/19 1:13 AM, Mel Gorman wrote:
> On Wed, Jul 24, 2019 at 10:50:14AM -0700, Mike Kravetz wrote:
>> When allocating hugetlbfs pool pages via /proc/sys/vm/nr_hugepages,
>> the pages will be interleaved between all nodes of the system. If
>> the nodes are not equal in size, one node may well fill up
>> before the others. When this happens, the code still attempts to
>> allocate pages from the full node. This results in calls to direct
>> reclaim and compaction which slow things down considerably.
>>
>> When allocating pool pages, note the state of the previous allocation
>> for each node. If the previous allocation failed, do not use the
>> aggressive retry algorithm on successive attempts. The allocation
>> will still succeed if there is memory available, but it will not try
>> as hard to free up memory.
>>
>> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
>
> set_max_huge_pages() can fail the NODEMASK_ALLOC(), which you handle,
> *but* in the event of an allocation failure this bug can silently
> recur. An informational message might be justified there, in case the
> stall recurs with no hint as to why.

Right.
Perhaps a NODEMASK_ALLOC() failure should just result in a quick
exit/error. If we can't allocate a node mask, it is unlikely we will be
able to allocate any huge pages. In that case, the system must be
extremely low on memory, and there are likely bigger issues at play.
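
For reference, the quick exit I have in mind would look something like
the sketch below (a sketch only, against the RFC patch; the
set_max_huge_pages() arguments and the node_alloc_noretry name are from
this series, and the existing pool resize loop is elided):

static int set_max_huge_pages(struct hstate *h, unsigned long count,
			      int nid, nodemask_t *nodes_allowed)
{
	unsigned long ret = 0;
	NODEMASK_ALLOC(nodemask_t, node_alloc_noretry, GFP_KERNEL);

	/*
	 * Bail out early rather than continuing without the mask.
	 * Continuing would silently reintroduce the aggressive-retry
	 * stalls this series is trying to avoid.
	 */
	if (!node_alloc_noretry)
		return -ENOMEM;

	/* ... existing pool resize loop, passing node_alloc_noretry ... */

	NODEMASK_FREE(node_alloc_noretry);
	return ret;
}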

There have been discussions elsewhere about discontinuing the use of
NODEMASK_ALLOC() and just putting the mask on the stack. That may be
acceptable here as well.
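
With the mask on the stack, the failure path disappears entirely.
Roughly the following sketch (assuming a nodemask_t is small enough to
live on the stack for all NODES_SHIFT configs, which is the open
question):

static int set_max_huge_pages(struct hstate *h, unsigned long count,
			      int nid, nodemask_t *nodes_allowed)
{
	/* on-stack mask: no allocation, no failure path, no free */
	nodemask_t node_alloc_noretry = NODE_MASK_NONE;

	/* ... existing pool resize loop, passing &node_alloc_noretry ... */

	return 0;
}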

> Technically passing NULL into
> NODEMASK_FREE is also safe as kfree (if used for that kernel config) can
> handle freeing of a NULL pointer. However, that is cosmetic more than
> anything. Whether you decide to change either or not;

Yes.
I will clean this up in an updated series after more feedback.

>
> Acked-by: Mel Gorman <mgorman@xxxxxxx>
>

Thanks!
--
Mike Kravetz