Re: [PATCH v3 10/13] mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock()

From: Qi Zheng
Date: Sun Aug 06 2023 - 22:22:04 EST




On 2023/8/6 11:55, Hugh Dickins wrote:
On Thu, 3 Aug 2023, Qi Zheng wrote:
On 2023/7/12 12:42, Hugh Dickins wrote:
Bring collapse_and_free_pmd() back into collapse_pte_mapped_thp().
It does need mmap_read_lock(), but it does not need mmap_write_lock(),
nor vma_start_write() nor i_mmap lock nor anon_vma lock. All racing
paths are relying on pte_offset_map_lock() and pmd_lock(), so use those.
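(Illustrative sketch, not part of the quoted patch: the usual calling pattern for the primitives named above. pte_offset_map_lock() maps the PTE table and takes its spinlock, and in recent kernels it can fail and return NULL, which callers must handle.)

	pte_t *pte;
	spinlock_t *ptl;

	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	if (!pte)
		goto out;			/* PTE table went away under us */
	/* ... examine or modify PTEs while holding ptl ... */
	pte_unmap_unlock(pte, ptl);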
...
@@ -1681,47 +1634,76 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		if (pte_none(ptent))
 			continue;
-		page = vm_normal_page(vma, addr, ptent);
-		if (WARN_ON_ONCE(page && is_zone_device_page(page)))
+		/*
+		 * We dropped ptl after the first scan, to do the mmu_notifier:
+		 * page lock stops more PTEs of the hpage being faulted in, but
+		 * does not stop write faults COWing anon copies from existing
+		 * PTEs; and does not stop those being swapped out or migrated.
+		 */
+		if (!pte_present(ptent)) {
+			result = SCAN_PTE_NON_PRESENT;
 			goto abort;
+		}
+		page = vm_normal_page(vma, addr, ptent);
+		if (hpage + i != page)
+			goto abort;
+
+		/*
+		 * Must clear entry, or a racing truncate may re-remove it.
+		 * TLB flush can be left until pmdp_collapse_flush() does it.
+		 * PTE dirty? Shmem page is already dirty; file is read-only.
+		 */
+		pte_clear(mm, addr, pte);

This is not a non-present PTE entry, so we should call ptep_clear() to let
page_table_check track the PTE clearing operation, right? Otherwise it
may lead to false positives.
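(For reference, a rough sketch of the generic ptep_clear() from include/linux/pgtable.h; exact details vary with kernel version and CONFIG_PAGE_TABLE_CHECK. The point is that it clears the entry just like pte_clear(), but also reports the clear to page_table_check:)

	static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep)
	{
		pte_t pte = ptep_get(ptep);		/* read the old (present) entry */

		pte_clear(mm, addr, ptep);		/* clear it, as before */
		page_table_check_pte_clear(mm, addr, pte);	/* keep the checker in sync */
	}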

You are right: thanks a lot for catching that: fix patch follows.
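(Presumably the fix amounts to a one-line change to the hunk quoted above, along these lines:)

	-		pte_clear(mm, addr, pte);
	+		ptep_clear(mm, addr, pte);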

With fix patch:

Reviewed-by: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>

Thanks.


Hugh