[PATCH v1 4/6] mm/rmap: warn on new PTE-mapped folios in page_add_anon_rmap()

From: David Hildenbrand
Date: Wed Sep 13 2023 - 08:52:32 EST


If the swapin code ever decides not to use order-0 pages and supplies a
PTE-mapped large folio, we will have to change how we call
__folio_set_anon() -- eventually with exclusive=false and an adjusted
(head page) address. For now, let's add a VM_WARN_ON_FOLIO() with a
comment about the situation.

Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
mm/rmap.c | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index 1ac5bd1b8169..489c142d073b 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1238,6 +1238,13 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,

if (unlikely(!folio_test_anon(folio))) {
VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+ /*
+ * For a PTE-mapped large folio, we only know that the single
+ * PTE is exclusive. Further, __folio_set_anon() might not get
+ * folio->index right when not given the address of the head
+ * page.
+ */
+ VM_WARN_ON_FOLIO(folio_test_large(folio) && !compound, folio);
__folio_set_anon(folio, vma, address,
!!(flags & RMAP_EXCLUSIVE));
} else if (likely(!folio_test_ksm(folio))) {
--
2.41.0