Re: [External] Re: [PATCH v3 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
From: Mike Kravetz
Date: Thu Nov 12 2020 - 19:39:04 EST
On 11/10/20 7:41 PM, Muchun Song wrote:
> On Wed, Nov 11, 2020 at 8:47 AM Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
>>
>> On 11/8/20 6:10 AM, Muchun Song wrote:
>> Unless I am reading the code incorrectly, it does not appear page->lru (of
>> the huge page) is being used for this purpose. Is that correct?
>>
>> If it is correct, would using page->lru of the huge page make this code
>> simpler? I am just missing the reason why you are using
>> page_huge_pte(page)->lru.
>
> For 1GB HugeTLB pages, we should pre-allocate more than one page
> table, so I use a linked list, with page_huge_pte(page) as the list head,
> because page->lru shares storage with page->pmd_huge_pte.
Sorry, but I do not understand the statement "page->lru shares storage with
page->pmd_huge_pte". Are you saying they are both in the head struct page of
the huge page?
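(For what it's worth, here is a cut-down sketch of the kind of field overlap
being described. This is illustrative userspace C only; the names are
stand-ins and NOT the kernel's actual struct page layout, which varies by
version and config.)

```c
#include <stddef.h>

/*
 * Illustrative analogue of overlaying two users on the same words of a
 * structure via a union, the way struct page overlays page->lru and
 * page->pmd_huge_pte.  Whoever uses one member clobbers the other, so
 * the two uses must never be active at the same time.
 */
struct list_head {
	struct list_head *next, *prev;
};

struct fake_page {
	union {
		struct list_head lru;	/* analogue of page->lru */
		void *pmd_huge_pte;	/* analogue of page->pmd_huge_pte */
	};
};
```

Because the members share storage, code that stores a page-table pointer in
pmd_huge_pte cannot simultaneously keep the page on an lru list.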
Here is what I was suggesting. If we just use page->lru for the list
then vmemmap_pgtable_prealloc() could be coded like the following:
static int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
{
	struct page *pte_page, *t_page;
	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);

	if (!nr)
		return 0;

	/* Store preallocated pages on huge page lru list */
	INIT_LIST_HEAD(&page->lru);

	while (nr--) {
		pte_t *pte_p;

		pte_p = pte_alloc_one_kernel(&init_mm);
		if (!pte_p)
			goto out;
		list_add(&virt_to_page(pte_p)->lru, &page->lru);
	}

	return 0;
out:
	list_for_each_entry_safe(pte_page, t_page, &page->lru, lru)
		pte_free_kernel(&init_mm, page_to_virt(pte_page));
	return -ENOMEM;
}
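To make the prealloc-then-unwind shape concrete, the same pattern can be
demonstrated in plain userspace C. The list type and helpers below are
minimal stand-ins for the kernel's <linux/list.h>, and every name here
(prealloc, pt_page, etc.) is illustrative only, not kernel API:

```c
#include <stdlib.h>

/* Minimal stand-ins for the kernel's list primitives. */
struct list_head {
	struct list_head *next, *prev;
};

static void init_list_head(struct list_head *h)
{
	h->next = h->prev = h;
}

static void list_add_front(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

/* Stand-in for struct page; lru must be the first member so a
 * list_head pointer can be cast back to the containing object. */
struct pt_page {
	struct list_head lru;
	void *mem;		/* stand-in for the page-table page itself */
};

/*
 * Preallocate nr "page tables" onto head.  On any failure, walk the
 * list and free everything allocated so far, mirroring the goto-out
 * unwind in vmemmap_pgtable_prealloc() above.
 */
static int prealloc(struct list_head *head, unsigned int nr)
{
	struct list_head *pos, *n;

	init_list_head(head);

	while (nr--) {
		struct pt_page *p = malloc(sizeof(*p));

		if (!p)
			goto out;
		p->mem = malloc(4096);
		if (!p->mem) {
			free(p);
			goto out;
		}
		list_add_front(&p->lru, head);
	}

	return 0;
out:
	for (pos = head->next; pos != head; pos = n) {
		struct pt_page *p = (struct pt_page *)pos;

		n = pos->next;
		free(p->mem);
		free(p);
	}
	return -1;
}
```

The point of the sketch is that the list head plus two helpers carry all the
state; no separate init/deposit/withdraw routines are needed.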
By doing this we could eliminate the routines,
	vmemmap_pgtable_init()
	vmemmap_pgtable_deposit()
	vmemmap_pgtable_withdraw()
and simply use the list manipulation routines.
To me, that looks simpler than the proposed code in this patch.
--
Mike Kravetz