Re: [PATCH 3/4] mm/hugetlb: make hugetlb migration callback CMA aware

From: Michal Hocko
Date: Wed Jul 15 2020 - 04:33:56 EST


On Wed 15-07-20 14:05:28, Joonsoo Kim wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
>
> new_non_cma_page() in gup.c needs to allocate a new page that is not
> in a CMA area. new_non_cma_page() implements this by using the
> allocation scope APIs.
>
> However, there is a workaround for hugetlb. The normal hugetlb page
> allocation API for migration is alloc_huge_page_nodemask(). It consists
> of two steps. The first is dequeuing from the pool. The second, if no
> page is available on the queue, is allocating with the page allocator.
>
> new_non_cma_page() can't use this API since the first step (dequeue)
> isn't aware of the scope API that excludes CMA areas. So,
> new_non_cma_page() exports the hugetlb-internal function for the second
> step, alloc_migrate_huge_page(), to global scope and uses it directly.
> This is suboptimal since hugetlb pages on the queue cannot be utilized.
>
> This patch fixes this situation by making the dequeue function in
> hugetlb CMA aware. In the dequeue function, CMA pages are skipped if
> the PF_MEMALLOC_NOCMA flag is set.

Now that this is in sync with the global case I do not have any
objections.

> Acked-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>

Acked-by: Michal Hocko <mhocko@xxxxxxxx>

Minor nit below

[...]
> @@ -1036,10 +1037,16 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
> static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
> {
> struct page *page;
> + bool nocma = !!(READ_ONCE(current->flags) & PF_MEMALLOC_NOCMA);

READ_ONCE is not really needed because current->flags is only ever
modified by the current task itself, so no race is possible.

> +
> + list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
> + if (nocma && is_migrate_cma_page(page))
> + continue;
>
> - list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
> if (!PageHWPoison(page))
> break;
> + }
> +
> /*
> * if 'non-isolated free hugepage' not found on the list,
> * the allocation fails.
> @@ -1928,7 +1935,7 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
> return page;
> }
>
> -struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
> +static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
> int nid, nodemask_t *nmask)
> {
> struct page *page;
> --
> 2.7.4

--
Michal Hocko
SUSE Labs