Re: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page

From: Oscar Salvador
Date: Wed Dec 16 2020 - 08:06:59 EST


On Sun, Dec 13, 2020 at 11:45:26PM +0800, Muchun Song wrote:
> +
> +/*
> + * vmemmap_rmap_walk - walk vmemmap page table
> + *
> + * @rmap_pte: called for each non-empty PTE (lowest-level) entry.
> + * @reuse: the page which is reused for the tail vmemmap pages.
> + * @vmemmap_pages: the list head of the vmemmap pages that can be freed.
> + */
> +struct vmemmap_rmap_walk {
> +	void (*rmap_pte)(pte_t *pte, unsigned long addr,
> +			 struct vmemmap_rmap_walk *walk);
> +	struct page *reuse;
> +	struct list_head *vmemmap_pages;
> +};
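
If I got it right, the callback is where the caller decides what to do
with each mapped tail page. Something along these lines is how I picture
it being filled in (the function name is made up by me, just to check my
understanding, and TLB flushing is left out of the sketch):

static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
				struct vmemmap_rmap_walk *walk)
{
	/*
	 * Queue the page this PTE currently maps for freeing by the
	 * caller, then redirect the PTE to the page that is reused
	 * for all tail vmemmap pages.
	 */
	struct page *page = pte_page(*pte);

	list_add(&page->lru, walk->vmemmap_pages);
	set_pte_at(&init_mm, addr, pte,
		   mk_pte(walk->reuse, PAGE_KERNEL_RO));
}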

Why did you choose this approach in this version?
Earlier versions of this patchset had a single vmemmap_to_pmd() function
which returned the PMD, while now we have several vmemmap_{levels}_range
helpers and a vmemmap_rmap_walk.
A brief explanation of why this change was introduced would have been nice.

I guess it is because earlier versions were too tailored to the use case
this patchset presents, while the new version tries to be more generic
so the interface can be reused in the future?
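
FWIW, the generic shape makes sense to me; I would picture a user wiring
it up roughly like this (all names here are mine, and vmemmap_pgd_range()
as the walk entry point is my guess, not taken from the patch):

	LIST_HEAD(free_pages);
	struct vmemmap_rmap_walk walk = {
		.rmap_pte	= vmemmap_restore_pte,
		.reuse		= reuse_page,
		.vmemmap_pages	= &free_pages,
	};

	vmemmap_pgd_range(start, end, &walk);

But still, spelling out that reasoning in the changelog would help.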


--
Oscar Salvador
SUSE L3