Re: [RFC PATCH v1 1/7] mm: hugetlb: Consolidate interpretation of gbl_chg within alloc_hugetlb_folio()
From: Joshua Hahn
Date: Wed Feb 25 2026 - 15:29:22 EST
On Wed, 11 Feb 2026 16:37:12 -0800 Ackerley Tng <ackerleytng@xxxxxxxxxx> wrote:
> Previously, gbl_chg was passed from alloc_hugetlb_folio() into
> dequeue_hugetlb_folio_vma(), leaking the concept of gbl_chg into
> dequeue_hugetlb_folio_vma().
>
> This patch consolidates the interpretation of gbl_chg into
> alloc_hugetlb_folio(), also renaming dequeue_hugetlb_folio_vma() to
> dequeue_hugetlb_folio() so dequeue_hugetlb_folio() can just focus on
> dequeuing a folio.
>
> No functional change intended.
>
> Signed-off-by: Ackerley Tng <ackerleytng@xxxxxxxxxx>
> Reviewed-by: James Houghton <jthoughton@xxxxxxxxxx>
Makes sense to me; this seems like a reasonable semantic change even
without factoring out hugetlb_alloc_folio(). Thank you!
Reviewed-by: Joshua Hahn <joshua.hahnjy@xxxxxxxxx>
> ---
> mm/hugetlb.c | 24 +++++++++---------------
> 1 file changed, 9 insertions(+), 15 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index a1832da0f6236..fd067bd394ee0 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1380,7 +1380,7 @@ static unsigned long available_huge_pages(struct hstate *h)
>
> static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
> struct vm_area_struct *vma,
> - unsigned long address, long gbl_chg)
> + unsigned long address)
> {
> struct folio *folio = NULL;
> struct mempolicy *mpol;
> @@ -1388,13 +1388,6 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
> nodemask_t *nodemask;
> int nid;
>
> - /*
> - * gbl_chg==1 means the allocation requires a new page that was not
> - * reserved before. Making sure there's at least one free page.
> - */
> - if (gbl_chg && !available_huge_pages(h))
> - goto err;
> -
> gfp_mask = htlb_alloc_mask(h);
> nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
>
> @@ -1412,9 +1405,6 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
>
> mpol_cond_put(mpol);
> return folio;
> -
> -err:
> - return NULL;
> }
>
> #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
> @@ -2962,12 +2952,16 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> goto out_uncharge_cgroup_reservation;
>
> spin_lock_irq(&hugetlb_lock);
> +
> /*
> - * glb_chg is passed to indicate whether or not a page must be taken
> - * from the global free pool (global change). gbl_chg == 0 indicates
> - * a reservation exists for the allocation.
> + * gbl_chg == 0 indicates a reservation exists for the allocation - so
> + * try dequeuing a page. If there are available_huge_pages(), try using
> + * them!
> */
> - folio = dequeue_hugetlb_folio_vma(h, vma, addr, gbl_chg);
> + folio = NULL;
> + if (!gbl_chg || available_huge_pages(h))
> + folio = dequeue_hugetlb_folio_vma(h, vma, addr);
> +
> if (!folio) {
> spin_unlock_irq(&hugetlb_lock);
> folio = alloc_buddy_hugetlb_folio_with_mpol(h, vma, addr);
> --
> 2.53.0.310.g728cabbaf7-goog