Re: [External] [PATCH 1/2] hugetlb: remove prep_compound_huge_page cleanup

From: Muchun Song
Date: Tue Jun 22 2021 - 05:09:57 EST


On Tue, Jun 22, 2021 at 10:15 AM Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
>
> The routine prep_compound_huge_page is a simple wrapper to call either
> prep_compound_gigantic_page or prep_compound_page. However, it is only
> called from gather_bootmem_prealloc, which only processes gigantic pages.
> Eliminate the routine and call prep_compound_gigantic_page directly.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>

Nice clean-up. Thanks.

Reviewed-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
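
For context on "only processes gigantic pages": the entries on the
huge_boot_pages list that gather_bootmem_prealloc() walks are only ever
queued by __alloc_bootmem_huge_page(), and the allocation loop only takes
the bootmem path for gigantic hstates. Roughly, from
hugetlb_hstate_alloc_pages() (trimmed here for illustration, not the
verbatim source):

	for (i = 0; i < h->max_huge_pages; ++i) {
		if (hstate_is_gigantic(h)) {
			/*
			 * Boot time: take pages from memblock and
			 * queue them on huge_boot_pages.
			 */
			if (!alloc_bootmem_huge_page(h))
				break;
		} else if (!alloc_pool_huge_page(h, &node_states[N_MEMORY],
						 node_alloc_noretry))
			break;
		cond_resched();
	}

So every page gather_bootmem_prealloc() sees belongs to a gigantic hstate,
which is what makes the new VM_BUG_ON() below safe.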

> ---
>  mm/hugetlb.c | 29 ++++++++++-------------------
>  1 file changed, 10 insertions(+), 19 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 760b5fb836b8..50596b7d6da9 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1320,8 +1320,6 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
>  }
>
> -static void prep_new_huge_page(struct hstate *h, struct page *page, int nid);
> -static void prep_compound_gigantic_page(struct page *page, unsigned int order);
>  #else /* !CONFIG_CONTIG_ALLOC */
>  static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  					int nid, nodemask_t *nodemask)
> @@ -2759,16 +2757,10 @@ int __alloc_bootmem_huge_page(struct hstate *h)
>  	return 1;
>  }
>
> -static void __init prep_compound_huge_page(struct page *page,
> -		unsigned int order)
> -{
> -	if (unlikely(order > (MAX_ORDER - 1)))
> -		prep_compound_gigantic_page(page, order);
> -	else
> -		prep_compound_page(page, order);
> -}
> -
> -/* Put bootmem huge pages into the standard lists after mem_map is up */
> +/*
> + * Put bootmem huge pages into the standard lists after mem_map is up.
> + * Note: This only applies to gigantic (order > MAX_ORDER) pages.
> + */
>  static void __init gather_bootmem_prealloc(void)
>  {
>  	struct huge_bootmem_page *m;
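
One nit on the new comment wording: "gigantic" here is strictly
order >= MAX_ORDER rather than order > MAX_ORDER (the removed check was
'order > (MAX_ORDER - 1)'), matching the helper in include/linux/hugetlb.h:

	static inline bool hstate_is_gigantic(struct hstate *h)
	{
		return huge_page_order(h) >= MAX_ORDER;
	}

Not worth a respin on its own, just noting it for the record.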
> @@ -2777,20 +2769,19 @@ static void __init gather_bootmem_prealloc(void)
>  		struct page *page = virt_to_page(m);
>  		struct hstate *h = m->hstate;
>
> +		VM_BUG_ON(!hstate_is_gigantic(h));
>  		WARN_ON(page_count(page) != 1);
> -		prep_compound_huge_page(page, huge_page_order(h));
> +		prep_compound_gigantic_page(page, huge_page_order(h));
>  		WARN_ON(PageReserved(page));
>  		prep_new_huge_page(h, page, page_to_nid(page));
>  		put_page(page); /* free it into the hugepage allocator */
>
>  		/*
> -		 * If we had gigantic hugepages allocated at boot time, we need
> -		 * to restore the 'stolen' pages to totalram_pages in order to
> -		 * fix confusing memory reports from free(1) and another
> -		 * side-effects, like CommitLimit going negative.
> +		 * We need to restore the 'stolen' pages to totalram_pages
> +		 * in order to fix confusing memory reports from free(1) and
> +		 * other side-effects, like CommitLimit going negative.
>  		 */
> -		if (hstate_is_gigantic(h))
> -			adjust_managed_page_count(page, pages_per_huge_page(h));
> +		adjust_managed_page_count(page, pages_per_huge_page(h));
>  		cond_resched();
>  	}
>  }
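
For anyone wondering why the totalram_pages fixup is needed at all:
adjust_managed_page_count() is essentially the following (from
mm/page_alloc.c, trimmed):

	void adjust_managed_page_count(struct page *page, long count)
	{
		atomic_long_add(count, &page_zone(page)->managed_pages);
		totalram_pages_add(count);
	#ifdef CONFIG_HIGHMEM
		if (PageHighMem(page))
			totalhigh_pages_add(count);
	#endif
	}

Bootmem huge pages are carved out of memblock before the buddy allocator
is initialized, so they were never counted in totalram_pages; this call
adds them back. Since every page here is now known to be gigantic, dropping
the hstate_is_gigantic() guard is a straightforward simplification.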
> --
> 2.31.1
>