[PATCH v2 5/6] mm: hugetlb: Move mem_cgroup_charge_hugetlb() earlier in allocation

From: Ackerley Tng via B4 Relay

Date: Wed May 06 2026 - 11:58:43 EST


From: Ackerley Tng <ackerleytng@xxxxxxxxxx>

Move mem_cgroup_charge_hugetlb() earlier in the folio allocation
process. This change draws a cleaner line between memcg charging and the
subsequent hugetlb-specific reservation logic for VMAs and subpools.

While it would be ideal to make all accounting and reservations perfectly
symmetric, mem_cgroup_charge_hugetlb() is a complex operation that cannot
be performed under the hugetlb_lock. Moving the charge to this earlier
point ensures that memcg charging is handled before the code begins
manipulating subpool and VMA-specific state. These two types of accounting
will be separated in a future patch.

If mem_cgroup_charge_hugetlb() fails, the folio is freed immediately and
the code branches to out_subpool_put so that the subpool reference is
dropped correctly.

Signed-off-by: Ackerley Tng <ackerleytng@xxxxxxxxxx>
---
mm/hugetlb.c | 31 ++++++++++++++++++-------------
1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 68c21305fc86a..4159b3565a9be 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2975,6 +2975,24 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,

spin_unlock_irq(&hugetlb_lock);

+ ret = mem_cgroup_charge_hugetlb(folio, gfp | __GFP_RETRY_MAYFAIL);
+ /*
+ * Unconditionally increment NR_HUGETLB here. If it turns out that
+ * mem_cgroup_charge_hugetlb failed, then immediately free the page and
+ * decrement NR_HUGETLB.
+ */
+ lruvec_stat_mod_folio(folio, NR_HUGETLB, pages_per_huge_page(h));
+
+ if (ret == -ENOMEM) {
+ free_huge_folio(folio);
+ /*
+ * Skip uncharging hugetlb_cgroup since the charges
+ * were committed to the folio and freeing the folio
+ * would have cleared those up.
+ */
+ goto out_subpool_put;
+ }
+
hugetlb_set_folio_subpool(folio, spool);

if (map_chg != MAP_CHG_ENFORCED) {
@@ -3002,19 +3020,6 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
}
}

- ret = mem_cgroup_charge_hugetlb(folio, gfp | __GFP_RETRY_MAYFAIL);
- /*
- * Unconditionally increment NR_HUGETLB here. If it turns out that
- * mem_cgroup_charge_hugetlb failed, then immediately free the page and
- * decrement NR_HUGETLB.
- */
- lruvec_stat_mod_folio(folio, NR_HUGETLB, pages_per_huge_page(h));
-
- if (ret == -ENOMEM) {
- free_huge_folio(folio);
- return ERR_PTR(-ENOMEM);
- }
-
return folio;

out_uncharge_cgroup:

--
2.54.0.545.g6539524ca2-goog