Re: [PATCH v1 1/7] mm: khugepaged: retract_page_tables() use pte_offset_map_lock()

From: Jann Horn
Date: Thu Oct 17 2024 - 14:01:20 EST


On Thu, Oct 17, 2024 at 11:47 AM Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx> wrote:
> In retract_page_tables(), we may modify the pmd entry after acquiring the
> pml and ptl, so we should also check whether the pmd entry is stable.
> Use pte_offset_map_lock() to do this, which also lets us remove the
> call to pte_lockptr().
>
> Signed-off-by: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
> ---
> mm/khugepaged.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 94feb85ce996c..b4f49d323c8d9 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1721,6 +1721,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
> spinlock_t *pml;
> spinlock_t *ptl;
> bool skipped_uffd = false;
> + pte_t *pte;
>
> /*
> * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
> @@ -1757,9 +1758,15 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
> mmu_notifier_invalidate_range_start(&range);
>
> pml = pmd_lock(mm, pmd);
> - ptl = pte_lockptr(mm, pmd);
> + pte = pte_offset_map_lock(mm, pmd, addr, &ptl);

This takes the lock "ptl" on the success path...

> + if (!pte) {
> + spin_unlock(pml);
> + mmu_notifier_invalidate_range_end(&range);
> + continue;
> + }
> if (ptl != pml)
> spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);

... and this takes the same lock again, right? I think this will
deadlock on kernels with CONFIG_SPLIT_PTE_PTLOCKS=y. Did you test this
on a machine with fewer than 4 CPU cores, or something like that? Or am
I missing something?