Re: [PATCH v10 2/8] mm/huge_memory: add two new (not yet used) functions for folio_split()
From: Zi Yan
Date: Mon Mar 10 2025 - 12:14:25 EST
On 7 Mar 2025, at 12:39, Zi Yan wrote:
> This is a preparation patch; neither of the added functions is used yet.
>
> The added __split_unmapped_folio() is able to split a folio with its
> mapping removed in two manners: 1) uniform split (the existing way), and
> 2) buddy-allocator-like (or non-uniform) split.
>
> The added __split_folio_to_order() can split a folio into any lower order.
> For a uniform split, __split_unmapped_folio() calls it once to split the
> given folio to the new order. For a buddy-allocator-like (non-uniform)
> split, __split_unmapped_folio() calls it (folio_order - new_order) times,
> each time splitting the folio that contains the given page to one lower
> order.
>
> Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
> Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> Cc: David Hildenbrand <david@xxxxxxxxxx>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> Cc: John Hubbard <jhubbard@xxxxxxxxxx>
> Cc: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
> Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
> Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
> Cc: Yang Shi <yang@xxxxxxxxxxxxxxxxxxxxxx>
> Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
> Cc: Kairui Song <kasong@xxxxxxxxxxx>
> ---
> mm/huge_memory.c | 348 ++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 347 insertions(+), 1 deletion(-)
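To illustrate the difference between the two split strategies described
above, here is a small userspace-only demo (illustrative only, not kernel
code; it simply models folio orders as integers and counts what each
strategy produces when an order-9 folio is split down to order 0):

#include <stdio.h>

int main(void)
{
	int old_order = 9, new_order = 0;
	int order;

	/* Uniform split: one __split_folio_to_order() call. */
	printf("uniform: %d order-%d folios\n",
	       1 << (old_order - new_order), new_order);

	/*
	 * Buddy-allocator-like split: (old_order - new_order) calls;
	 * each call keeps one folio of the next lower order and keeps
	 * splitting the half that still contains the target page.
	 */
	printf("non-uniform:");
	for (order = old_order - 1; order > new_order; order--)
		printf(" one order-%d folio +", order);
	printf(" two order-%d folios\n", new_order);

	return 0;
}

Running it prints 512 order-0 folios for the uniform case, versus one folio
of each order from 8 down to 1 plus two order-0 folios for the non-uniform
case.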
Hi Andrew,
The patch below should fix the issues discovered by Hugh. Please fold
it into this patch. Thank you for all the help.
From 22ced0e84e756a1084a1eb32d1de596ca10e3b3c Mon Sep 17 00:00:00 2001
From: Zi Yan <ziy@xxxxxxxxxx>
Date: Mon, 10 Mar 2025 11:59:42 -0400
Subject: [PATCH] mm/huge_memory: unfreeze head folio after page cache entries
are updated
Otherwise, a parallel folio_try_get() can grab the head folio and its caller
can see stale page cache entries, leading to data corruption.
Also, drop large EOF tail folios with the right number of references to
prevent a memory leak.
Reported-by: Hugh Dickins <hughd@xxxxxxxxxx>
Closes: https://lore.kernel.org/all/fcbadb7f-dd3e-21df-f9a7-2853b53183c4@xxxxxxxxxx/
Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
---
mm/huge_memory.c | 26 ++++++++++++++++++--------
1 file changed, 18 insertions(+), 8 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8a42150298de..f06508e4d242 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3573,17 +3573,18 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
}
/*
- * Unfreeze refcount first. Additional reference from
- * page cache.
+ * origin_folio should be kept frozen until page cache
+ * entries are updated with all the other after-split
+ * folios to prevent others from seeing stale page cache
+ * entries.
*/
- folio_ref_unfreeze(release,
- 1 + ((!folio_test_anon(origin_folio) ||
- folio_test_swapcache(origin_folio)) ?
- folio_nr_pages(release) : 0));
-
if (release == origin_folio)
continue;
+ folio_ref_unfreeze(release, 1 +
+ ((mapping || swap_cache) ?
+ folio_nr_pages(release) : 0));
+
lru_add_page_tail(origin_folio, &release->page,
lruvec, list);
@@ -3595,7 +3596,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
folio_account_cleaned(release,
inode_to_wb(mapping->host));
__filemap_remove_folio(release, NULL);
- folio_put(release);
+ folio_put_refs(release, folio_nr_pages(release));
} else if (mapping) {
__xa_store(&mapping->i_pages,
release->index, release, 0);
@@ -3607,6 +3608,15 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
}
}
+ /*
+ * Unfreeze origin_folio only after all page cache entries, which used
+ * to point to it, have been updated with new folios. Otherwise,
+ * a parallel folio_try_get() can grab origin_folio and its caller can
+ * see stale page cache entries.
+ */
+ folio_ref_unfreeze(origin_folio, 1 +
+ ((mapping || swap_cache) ? folio_nr_pages(origin_folio) : 0));
+
unlock_page_lruvec(lruvec);
if (swap_cache)
--
2.47.2
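For anyone who wants to see the freeze/unfreeze ordering in isolation, here
is a toy userspace model (struct toy_folio and toy_try_get() are made-up
illustrations, not kernel APIs; they only mimic the way folio_try_get()
refuses to pin a frozen folio):

#include <stdatomic.h>
#include <stdio.h>

/* Toy model only: refcount == 0 stands in for a frozen folio. */
struct toy_folio {
	atomic_int refcount;
};

/* Mimics folio_try_get(): never pins a folio whose refcount is 0. */
static int toy_try_get(struct toy_folio *f)
{
	int ref = atomic_load(&f->refcount);

	while (ref > 0) {
		if (atomic_compare_exchange_weak(&f->refcount, &ref, ref + 1))
			return 1;
	}
	return 0;
}

int main(void)
{
	struct toy_folio origin;

	atomic_init(&origin.refcount, 0);	/* frozen for the split */

	/* While frozen, a racing lookup cannot pin the folio ... */
	printf("try_get while frozen:   %d\n", toy_try_get(&origin));

	/* ... so the page cache entries can be updated safely here ... */

	atomic_store(&origin.refcount, 1);	/* unfreeze only afterwards */
	printf("try_get after unfreeze: %d\n", toy_try_get(&origin));

	return 0;
}

The real code uses folio_ref_freeze()/folio_ref_unfreeze() and the xarray,
of course, but the ordering is the same: origin_folio is unfrozen only after
every page cache entry points at the new folios.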
Best Regards,
Yan, Zi