Re: [PATCH v4 2/2] mm: hugetlb: optionally allocate gigantic hugepages using cma

From: Michal Hocko
Date: Tue Apr 07 2020 - 03:03:37 EST


On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
[...]
My ack still applies, but I have noticed two minor things now.

[...]
> @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> set_page_refcounted(page);
> if (hstate_is_gigantic(h)) {
> + /*
> + * Temporarily drop the hugetlb_lock, because
> + * we might block in free_gigantic_page().
> + */
> + spin_unlock(&hugetlb_lock);
> destroy_compound_gigantic_page(page, huge_page_order(h));
> free_gigantic_page(page, huge_page_order(h));
> + spin_lock(&hugetlb_lock);

This is OK with the current code because the existing paths do not have to
revalidate the state after retaking the lock AFAICS, but it is a bit
subtle. I have checked the cma_free path and it can only sleep on
cma->lock, unless I am missing something. That lock only protects the
cma bitmap manipulation, so a mutex sounds like overkill there; it could
be replaced by a spinlock.

Sounds like follow-up patch material to me.

[...]
> + for_each_node_state(nid, N_ONLINE) {
> + int res;
> +
> + size = min(per_node, hugetlb_cma_size - reserved);
> + size = round_up(size, PAGE_SIZE << order);
> +
> + res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> + 0, false, "hugetlb",
> + &hugetlb_cma[nid], nid);
> + if (res) {
> + pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> + res, nid);
> + break;

Do we really have to break out after a single node failure? There might
be other nodes that can satisfy the allocation. You are not cleaning up
the previous reservations, so a partial state already exists anyway, and
then it would make more sense to me to simply s@break@continue@ here.

> + }
> +
> + reserved += size;
> + pr_info("hugetlb_cma: reserved %lu MiB on node %d\n",
> + size / SZ_1M, nid);
> +
> + if (reserved >= hugetlb_cma_size)
> + break;
> + }
> +}
--
Michal Hocko
SUSE Labs