Re: [PATCH v4 3/3] mm: Batch-zap large anonymous folio PTE mappings
From: David Hildenbrand
Date: Thu Aug 03 2023 - 10:12:57 EST
With this patch, you might suddenly have mapcount > refcount for a folio,
or am I wrong?
Yes you would. Does that break things?
It is problematic whenever you want to check for additional page
references that are not from mappings (i.e., GUP refs/pins or anything else).
One example lives in KSM code (!compound only):
page_mapcount(page) + 1 + swapped != page_count(page)
Another one in compaction code:
if (!mapping && (folio_ref_count(folio) - 1) > folio_mapcount(folio))
And another one in khugepaged (is_refcount_suitable)
... and another one in the THP split path, can_split_folio() (although that
one can deal with false positives and false negatives).
We want to avoid detecting "no other references" when there *are* other
references. Falsely detecting "there are other references" when there are
none is usually the safer failure mode.
Assume mapcount > refcount holds for some time due to concurrent
unmapping, AND the folio has some unrelated extra reference. The folio would
suddenly pass these checks (mapcount == refcount) and the extra reference
might go undetected.
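
To make that concrete, here is a rough sketch of the kind of check the
callers above rely on (the helper name is made up for illustration;
folio_ref_count()/folio_mapcount() are the real APIs, and the actual checks
in compaction/khugepaged differ in detail):

	/*
	 * Illustrative only: each mapping of the folio holds one
	 * reference, so with no GUP pins or other users we expect
	 * refcount == mapcount + 1, the +1 being the caller's own ref.
	 */
	static bool folio_has_extra_refs(struct folio *folio)
	{
		return folio_ref_count(folio) - 1 > folio_mapcount(folio);
	}

If the batched zap lets the refcount drop before the corresponding
mapcounts (so mapcount > refcount for a while), an unrelated pin no longer
pushes the left-hand side above the right-hand side, and the check wrongly
reports "no extra references".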
+
+	for (i = 0; i < nr_pages;) {
+		ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
+		tlb_remove_tlb_entry(tlb, pte, addr);
+		zap_install_uffd_wp_if_needed(vma, addr, pte, details, ptent);
+		full = __tlb_remove_page(tlb, page, 0);
+
+		if (unlikely(page_mapcount(page) < 1))
+			print_bad_pte(vma, addr, ptent, page);
Can we avoid new users of page_mapcount() outside rmap code, please? :)
Sure. This is just trying to replicate the same diagnostics that are done on
the non-batched path (roughly sketched below). I'm happy to remove it.
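
For reference, the non-batched path in zap_pte_range() does something along
these lines (paraphrased from memory, so details may differ slightly):

	page_remove_rmap(page, vma, false);
	if (unlikely(page_mapcount(page) < 0))
		print_bad_pte(vma, addr, ptent, page);

i.e. after the rmap has been removed, a negative mapcount is reported as a
bad PTE.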
Spotted it afterwards in the existing code already, so you're effectively
not adding new ones.
--
Cheers,
David / dhildenb