Re: [PATCH] mm/rmap.c: split huge pmd when it really is

From: Kirill A. Shutemov
Date: Mon Dec 23 2019 - 12:16:54 EST


On Mon, Dec 23, 2019 at 10:24:35AM +0800, Wei Yang wrote:
> There are two places that call try_to_unmap_one() with TTU_SPLIT_HUGE_PMD
> set:
>
> * unmap_page()
> * shrink_page_list()
>
> In both cases, the page passed to try_to_unmap_one() is the PageHead()
> of the THP. If the address at which this page is mapped in the process
> is not HPAGE_PMD_SIZE aligned, the THP is not mapped as a PMD THP in
> that process. This can happen when we mremap() a PMD-sized range to an
> unaligned address.
>
> Currently, this case only happens to be handled by the following check
> in __split_huge_pmd():
>
> page != pmd_page(*pmd)
>
> This patch checks the address up front so that the unnecessary work can
> be skipped.
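
For context, the check being proposed presumably looks something like
this in try_to_unmap_one() (a sketch only, not the actual hunk):

	if (flags & TTU_SPLIT_HUGE_PMD) {
		/*
		 * Sketch: only attempt the split when the THP can
		 * actually be PMD-mapped at this address.
		 */
		if (IS_ALIGNED(address, HPAGE_PMD_SIZE))
			split_huge_pmd_address(vma, address,
					       flags & TTU_SPLIT_FREEZE,
					       page);
	}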

Do you see any measurable performance improvement? rmap is heavy enough
that I expect this kind of overhead to be within the noise level.

I don't have anything against the check, but it complicates the picture.

And if we are going down this path, it is also worth checking whether the
vma is long enough to hold a huge page.

And I don't see why the check cannot be done inside split_huge_pmd_address().
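
Something along these lines at the top of split_huge_pmd_address() would
cover both conditions before we bother walking the page tables (a rough
sketch, untested; the exact bounds check is open to discussion):

	void split_huge_pmd_address(struct vm_area_struct *vma,
				    unsigned long address,
				    bool freeze, struct page *page)
	{
		/* Sketch: a PMD mapping is impossible here, nothing to split. */
		if (!IS_ALIGNED(address, HPAGE_PMD_SIZE) ||
		    address + HPAGE_PMD_SIZE > vma->vm_end)
			return;

		/* ... existing page table walk and __split_huge_pmd() ... */
	}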

--
Kirill A. Shutemov