Re: [PATCH v3 3/5] hugetlb: be sure to free demoted CMA pages to CMA

From: Mike Kravetz
Date: Wed Oct 06 2021 - 14:28:29 EST


On 10/6/21 12:54 AM, Oscar Salvador wrote:
> On Tue, Oct 05, 2021 at 11:57:54AM -0700, Mike Kravetz wrote:
>> It is the smallest.
>>
>> CMA uses a per-region bitmap to track allocations. When setting up the
>> region, you specify how many pages each bit represents. Currently,
>> only gigantic pages are allocated/freed from CMA, so the region is set up
>> such that one bit represents a gigantic page size allocation.
>>
>> With demote, a gigantic page (allocation) could be split into smaller
>> size pages, and these smaller pages will be freed to CMA. Since the
>> per-region bitmap must represent the smallest allocation/free size, it
>> now needs to be set up for the smallest huge page size which can be
>> freed to CMA.
>>
>> Unfortunately, we set up the CMA region for huge pages before we set up
>> huge page sizes (hstates). So, technically we do not know the smallest
>> huge page size, as this can change via command line options and
>> architecture-specific code. Therefore, at region setup time we need some
>> constant value for the smallest possible huge page size. That is why
>> HUGETLB_PAGE_ORDER is used.
>
> Do you know if that is done for a reason? Setting up CMA for hugetlb before
> initializing hugetlb itself? Would it not make more sense to do it the other
> way around?
>
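
For context, the region is declared very early, and the order_per_bit
argument is where that "smallest allocation/free size" comes in. The
snippet below is a simplified paraphrase of the hugetlb_cma_reserve()
path with this patch applied, not the exact hunk (res, size, name and
nid come from the surrounding per-node loop):

	/*
	 * 'order per bit' is based on the smallest size that may be
	 * returned to the CMA allocator in the case of huge page
	 * demotion.
	 */
	res = cma_declare_contiguous_nid(0, size, 0,
					 PAGE_SIZE << HUGETLB_PAGE_ORDER,
					 HUGETLB_PAGE_ORDER, false, name,
					 &hugetlb_cma[nid], nid);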

One reason is that the initialization sequence is a bit messy. In most
cases, arch-specific code sets up huge pages, so we would need to make
sure this is done before the CMA initialization. This might be
possible, but I am not confident in my ability to understand, modify,
and test early init code for all architectures supporting hugetlb CMA
allocations.

In addition, not all architectures initialize all of their huge page
sizes up front. It is possible for an architecture to only set up the
huge page sizes that have been requested on the command line. In such
cases, it would require some fancy command line parsing to look for and
process a hugetlb_cma argument before any other hugetlb argument. I am
not even sure that is possible.

The most reasonable way to address this would be to add an arch-specific
callback asking for the smallest supported huge page size. I did not do
that here, as I am not sure it is really going to be an issue; in the
use case (and architecture) I know of, it is not. As you mention, this
or something else could be added if the need arises. A rough sketch of
what such a callback could look like is below.
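
To be clear, nothing below exists in the tree today; it is just a
hypothetical sketch of the shape such a callback could take, with
arch_hugetlb_cma_min_order() as a made-up name:

	/*
	 * Hypothetical sketch only.  A __weak default returns the
	 * current constant; an architecture that can free smaller
	 * huge pages to CMA would provide its own version.
	 */
	unsigned int __weak arch_hugetlb_cma_min_order(void)
	{
		return HUGETLB_PAGE_ORDER;
	}

hugetlb_cma_reserve() could then pass the returned order as the
order_per_bit argument when declaring the region, instead of the
HUGETLB_PAGE_ORDER constant.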
--
Mike Kravetz

> The way I see it, it is a bit unfortunate that we cannot demote only
> gigantic pages per se, i.e. 1GB on x86_64 and 16GB on arm64 with a 64k
> page size.
>
> I guess it is nothing to be worried about now as this is an early stage,
> but maybe something to think about in the future in case we want to
> allow for more flexibility.
>
>> I should probably add all that to the changelog for clarity?
>
> Yes, I think it would be great to have that as context.
>
>> After your comment yesterday about rewriting this code for clarity, this
>> now becomes:
>>
>> /*
>>  * Set demote order for each hstate.  Note that
>>  * h->demote_order is initially 0.
>>  * - We can not demote gigantic pages if runtime freeing
>>  *   is not supported, so skip this.
>>  * - If CMA allocation is possible, we can not demote
>>  *   HUGETLB_PAGE_ORDER or smaller size pages.
>>  */
>> if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
>> 	continue;
>> if (hugetlb_cma_size && h->order <= HUGETLB_PAGE_ORDER)
>> 	continue;
>> for_each_hstate(h2) {
>> 	if (h2 == h)
>> 		continue;
>> 	if (h2->order < h->order &&
>> 	    h2->order > h->demote_order)
>> 		h->demote_order = h2->order;
>> }
>>
>> Hopefully, that is more clear.
>
> Definitely, this looks better to me.
>