Re: [RFC PATCH] mm: hugetlb: remove __GFP_THISNODE flag when dissolving the old hugetlb

From: Baolin Wang
Date: Fri Feb 02 2024 - 04:30:04 EST

On 2/2/2024 4:17 PM, Michal Hocko wrote:
On Fri 02-02-24 09:35:58, Baolin Wang wrote:


On 2/1/2024 11:27 PM, Michal Hocko wrote:
On Thu 01-02-24 21:31:13, Baolin Wang wrote:
Since commit 369fa227c219 ("mm: make alloc_contig_range handle free
hugetlb pages"), alloc_contig_range() can handle free hugetlb pages
by allocating a fresh hugetlb page and replacing the old one in the
free hugepage pool.

However, our customers still see alloc_contig_range() fail when it
encounters a free hugetlb page. The reason is that there is little free
memory on the old hugetlb page's node, so isolate_or_dissolve_huge_page()
cannot allocate a fresh hugetlb page on that node, because the allocation
sets the __GFP_THISNODE flag. This makes sense to some degree.
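
For reference, the allocation in question looks roughly like this
(paraphrased from the replacement-allocation helper in mm/hugetlb.c;
the exact function name varies across kernel versions):

static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
		struct folio *old_folio, struct list_head *list)
{
	/* The replacement page is pinned to the old page's node. */
	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
	int nid = folio_nid(old_folio);
	...
}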

Later, commit ae37c7ff79f1 ("mm: make alloc_contig_range handle
in-use hugetlb pages") handled the in-use hugetlb pages by isolating
them and migrating them in __alloc_contig_migrate_range(), but that
path allows falling back to other NUMA nodes when allocating the new
hugetlb page in alloc_migration_target().
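
By contrast, the in-use path ends up in alloc_migration_target(), which
honors whatever node/nodemask the caller passes and is therefore free to
fall back (roughly, from mm/migrate.c; details differ between versions):

struct folio *alloc_migration_target(struct folio *src, unsigned long private)
{
	struct migration_target_control *mtc = (void *)private;
	gfp_t gfp_mask = mtc->gfp_mask;
	int nid = mtc->nid;

	if (nid == NUMA_NO_NODE)
		nid = folio_nid(src);

	if (folio_test_hugetlb(src)) {
		struct hstate *h = folio_hstate(src);

		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
		/* mtc->nmask may span all allowed nodes. */
		return alloc_hugetlb_folio_nodemask(h, nid,
				mtc->nmask, gfp_mask);
	}
	...
}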

This introduces an inconsistency between the handling of free and
in-use hugetlb pages. Considering that CMA allocation and memory
hotplug, both of which rely on alloc_contig_range(), are important in
some scenarios, and to keep the hugetlb handling consistent, we should
remove the __GFP_THISNODE flag in isolate_or_dissolve_huge_page() to
allow falling back to other NUMA nodes, which solves the
alloc_contig_range() failure in our case.
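
In other words, the proposed change boils down to:

-	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
+	gfp_t gfp_mask = htlb_alloc_mask(h);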

I do agree that the inconsistency is not really good, but I am not sure
dropping __GFP_THISNODE is the right way forward. Breaking pre-allocated
per-node pools might result in unexpected failures when node-bound
workloads don't get what they assume is available. Keep in mind that our
user APIs allow pre-allocating per-node pools separately.

Yes, I agree, that is also my concern. But sometimes users don't care
about the per-node distribution of hugetlb; instead, they are more
concerned about the success of CMA allocation or memory hotplug.

Yes, sometimes the exact per-node distribution is not really important.
But the kernel has no way of knowing that right now. And we have to make
a conservative guess here.
The in-use hugetlb is a very similar case. While having a temporarily
misplaced page doesn't really look terrible, once that hugetlb page is
released back into the pool we are back to the case above. Either we
make sure that the node affinity is restored later on, or it shouldn't
be migrated to a different node at all.

Agree. So how about the following changes?
(1) disallow falling back to other nodes when handling in-use hugetlb,
which ensures consistent behavior in handling hugetlb (see the sketch below).
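
As an untested sketch of (1): since htlb_modify_alloc_mask() already
propagates __GFP_THISNODE from the caller's gfp mask, alloc_contig_range()
could request it in its migration target control. A real patch would need
to be more careful, since this would also pin non-hugetlb migration
targets to the node:

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 	struct migration_target_control mtc = {
 		.nid = zone_to_nid(cc->zone),
-		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL |
+			    __GFP_THISNODE,
 	};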

I can see two cases here. alloc_contig_range, which is an internal kernel
user, and then we have memory offlining. The former shouldn't break the
per-node hugetlb pool reservations; the latter might not have any other
choice (the whole node could go offline, which resembles breaking cpu
affinity if the cpu is gone).

IMO, that is not always true for memory offlining: when handling a free hugetlb page, it disallows falling back, which is inconsistent.

Not only memory offlining: longterm pinning (in migrate_longterm_unpinnable_pages()) and memory failure (in soft_offline_in_use_page()) can also break the per-node hugetlb pool reservations.
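
For instance, the longterm pinning path builds its migration target with
no node preference at all (roughly, from migrate_longterm_unpinnable_pages()
in mm/gup.c), so the replacement hugetlb page can come from any node:

	struct migration_target_control mtc = {
		.nid = NUMA_NO_NODE,
		.gfp_mask = GFP_USER | __GFP_NOWARN,
	};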

Now I can see how a hugetlb page sitting inside a CMA region breaks CMA
users' expectations, but hugetlb migration already tries hard to allocate
a replacement hugetlb, so the system must be under heavy memory
pressure if that fails, right? Is it possible that the hugetlb
reservation is just overshot here? Maybe the memory is just terribly
fragmented, though?

Could you be more specific about numbers in your failure case?

Sure. Our customer's machine contains several NUMA nodes. The system reserves a large amount of CMA memory, occupying 50% of the total memory, to be used by virtual machines, and it also reserves lots of hugetlb pages, which can occupy 50% of the CMA area. So before a virtual machine starts, hugetlb can use 50% of the CMA, but when the virtual machine starts, the CMA is handed over to it and the hugetlb pages must be migrated out of the CMA area.

With several nodes in the system, one node's memory can be exhausted, which makes the hugetlb migration fail when the __GFP_THISNODE flag is set.

(2) introduce a new sysctl (perhaps named "hugetlb_allow_fallback_nodes")
for users to control whether fallback is allowed, which can solve the CMA
or memory hotplug failures that users are more concerned about.

I do not think this is a good idea. The policy might be different on
each node and this would get messy pretty quickly. If anything, we could
try to detect a dedicated per-node pool allocation instead. It is quite
likely that if the admin preallocates the pool without any memory policy,
then the exact distribution of pages doesn't play a huge role.

I also agree. Now I think the policy is already messy when handling hugetlb migration:

1. CMA allocation: may or may not break the per-node hugetlb pool reservations.
1.1 handling free hugetlb: cannot break the per-node reservations.
1.2 handling in-use hugetlb: can break the per-node reservations.
2. memory offlining: may or may not break the per-node hugetlb pool reservations.
2.1 handling free hugetlb: cannot break the per-node reservations.
2.2 handling in-use hugetlb: can break the per-node reservations.
3. longterm pinning: can break the per-node hugetlb pool reservations.
4. memory soft-offline: can break the per-node hugetlb pool reservations.

What a messy policy. And we currently have no documentation describing it. So we need to make the hugetlb migration handling clearer, with proper documentation.