Re: [PATCH v4 3/8] hugetlb: perform vmemmap optimization on a list of pages
From: Muchun Song
Date: Tue Sep 19 2023 - 23:05:53 EST
> On Sep 20, 2023, at 04:49, Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
>
> On 09/19/23 11:10, Muchun Song wrote:
>>
>>
>> On 2023/9/19 07:01, Mike Kravetz wrote:
>>> When adding hugetlb pages to the pool, we first create a list of the
>>> allocated pages before adding to the pool. Pass this list of pages to a
>>> new routine hugetlb_vmemmap_optimize_folios() for vmemmap optimization.
>>>
>>> Due to significant differences in vmemmap initialization for bootmem
>>> allocated hugetlb pages, a new routine prep_and_add_bootmem_folios
>>> is created.
>>>
>>> We also modify the routine vmemmap_should_optimize() to check for pages
>>> that are already optimized. There are code paths that might request
>>> vmemmap optimization twice and we want to make sure this is not
>>> attempted.
>>>
>>> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
>>> ---
>>> mm/hugetlb.c | 50 +++++++++++++++++++++++++++++++++++++-------
>>> mm/hugetlb_vmemmap.c | 11 ++++++++++
>>> mm/hugetlb_vmemmap.h | 5 +++++
>>> 3 files changed, 58 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index 8624286be273..d6f3db3c1313 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -2269,6 +2269,11 @@ static void prep_and_add_allocated_folios(struct hstate *h,
>>> {
>>> struct folio *folio, *tmp_f;
>>> + /*
>>> + * Send list for bulk vmemmap optimization processing
>>> + */
>>
>> Per the kernel coding-style documentation, the one-line comment format is "/* ... */".
>>
>
> Will change the comments introduced here.
BTW, there are some other places like this as well, please update them all, thanks.
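For example, a single-line comment for the hunk above would look something like:

	/* Send list for bulk vmemmap optimization processing */
	hugetlb_vmemmap_optimize_folios(h, folio_list);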
>
>>> + hugetlb_vmemmap_optimize_folios(h, folio_list);
>>> +
>>> /*
>>> * Add all new pool pages to free lists in one lock cycle
>>> */
>>> @@ -3309,6 +3314,40 @@ static void __init hugetlb_folio_init_vmemmap(struct folio *folio,
>>> prep_compound_head((struct page *)folio, huge_page_order(h));
>>> }
>>> +static void __init prep_and_add_bootmem_folios(struct hstate *h,
>>> + struct list_head *folio_list)
>>> +{
>>> + struct folio *folio, *tmp_f;
>>> +
>>> + /*
>>> + * Send list for bulk vmemmap optimization processing
>>> + */
>>> + hugetlb_vmemmap_optimize_folios(h, folio_list);
>>> +
>>> + /*
>>> + * Add all new pool pages to free lists in one lock cycle
>>> + */
>>> + spin_lock_irq(&hugetlb_lock);
>>> + list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
>>> + if (!folio_test_hugetlb_vmemmap_optimized(folio)) {
>>> + /*
>>> + * If HVO fails, initialize all tail struct pages
>>> + * We do not worry about potential long lock hold
>>> + * time as this is early in boot and there should
>>> + * be no contention.
>>> + */
>>> + hugetlb_folio_init_tail_vmemmap(folio,
>>> + HUGETLB_VMEMMAP_RESERVE_PAGES,
>>> + pages_per_huge_page(h));
>>> + }
>>> + __prep_account_new_huge_page(h, folio_nid(folio));
>>> + enqueue_hugetlb_folio(h, folio);
>>> + }
>>> + spin_unlock_irq(&hugetlb_lock);
>>> +
>>> + INIT_LIST_HEAD(folio_list);
>>
>> I'm not sure what is the purpose of the reinitialization to list head?
>>
>
> There really is no purpose. This was copied from
> prep_and_add_allocated_folios which also has this unnecessary call. It is
> unnecessary as enqueue_hugetlb_folio() will do a list_move for each
> folio on the list. Therefore, at the end of the loop we KNOW the list
> is empty.
Right.
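(To spell it out: since enqueue_hugetlb_folio() does a list_move() on folio->lru,
the loop above is roughly equivalent to the sketch below, so the trailing
INIT_LIST_HEAD() never sees a non-empty list:

	list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
		__prep_account_new_huge_page(h, folio_nid(folio));
		/* list_move() unlinks folio->lru from folio_list ... */
		enqueue_hugetlb_folio(h, folio);
		/* ... so after the last iteration folio_list is empty */
	}
)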
>
> I will remove it here and in prep_and_add_allocated_folios().
Thanks.
>
> Thanks,
> --
> Mike Kravetz