Re: [PATCH v3 3/5] hugetlb: be sure to free demoted CMA pages to CMA

From: Oscar Salvador
Date: Tue Oct 05 2021 - 05:33:26 EST


On Fri, Oct 01, 2021 at 10:52:08AM -0700, Mike Kravetz wrote:
> When huge page demotion is fully implemented, gigantic pages can be
> demoted to a smaller huge page size. For example, on x86 a 1G page
> can be demoted to 512 2M pages. However, gigantic pages can potentially
> be allocated from CMA. If a gigantic page which was allocated from CMA
> is demoted, the corresponding demoted pages need to be returned to CMA.
>
> Use the new interface cma_pages_valid() to determine if a non-gigantic
> hugetlb page should be freed to CMA. Also, clear mapping field of these
> pages as expected by cma_release.
>
> This also requires a change to CMA reservations for gigantic pages.
> Currently, the 'order_per_bit' is set to the gigantic page size.
> However, if gigantic pages can be demoted this needs to be set to the
> order of the smallest huge page. At CMA reservation time we do not know

To the smallest huge page order, or to the next smaller one? Would you mind
elaborating on why?

> @@ -3003,7 +3020,8 @@ static void __init hugetlb_init_hstates(void)
> * is not supported.
> */
> if (!hstate_is_gigantic(h) ||
> - gigantic_page_runtime_supported()) {
> + gigantic_page_runtime_supported() ||
> + !hugetlb_cma_size || !(h->order <= HUGETLB_PAGE_ORDER)) {

I am a bit lost in the CMA area, so bear with me.
Do we not allow demotion if we specified that we want hugetlb pages from CMA?
Also, can h->order be smaller than HUGETLB_PAGE_ORDER? I thought
HUGETLB_PAGE_ORDER was the smallest one.

The check for HUGETLB_PAGE_ORDER can probably be squashed into patch #1.


> for_each_hstate(h2) {
> if (h2 == h)
> continue;
> @@ -3555,6 +3573,8 @@ static ssize_t demote_size_store(struct kobject *kobj,
> if (!t_hstate)
> return -EINVAL;
> demote_order = t_hstate->order;
> + if (demote_order < HUGETLB_PAGE_ORDER)
> + return -EINVAL;

This could probably go in the first patch.


--
Oscar Salvador
SUSE Labs