[PATCH v4 3/3] mm/compaction: optimize >0 order folio compaction with free page split.
From: Zi Yan
Date: Mon Feb 12 2024 - 11:41:36 EST
From: Zi Yan <ziy@xxxxxxxxxx>
During migration in memory compaction, free pages are placed in an array
of page lists based on their order. But the desired free page order
(i.e., the order of a source page) might not always be present, thus
leading to migration failures and premature compaction termination. Split
a high order free page when the source migration page has a lower order to
increase the migration success rate.
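For illustration only (not part of the patch): a minimal userspace C sketch
of the split described above. At each step the free block is halved, the tail
half stays free at the now-lower order, and the head half keeps being split
until it matches the requested order. The function name and pfn values below
are made up for the example.

/* Illustration only: buddy-style split of an order-start_order free block
 * down to the requested order. Plain userspace C, not kernel code. */
#include <stdio.h>

static void split_free_block(unsigned long base_pfn, int start_order, int order)
{
	unsigned long size = 1UL << start_order;

	while (start_order > order) {
		start_order--;
		size >>= 1;
		/* tail half stays free at the new, lower order */
		printf("order-%d half kept free at pfn %lu\n",
		       start_order, base_pfn + size);
	}
	/* head half is now exactly the requested order */
	printf("order-%d block at pfn %lu used as migration target\n",
	       order, base_pfn);
}

int main(void)
{
	/* e.g. an order-3 (8-page) free block satisfying an order-0 request */
	split_free_block(4096, 3, 0);
	return 0;
}

In the patch itself the tail halves go on cc->freepages[start_order], and
their order is recorded with set_page_private() so it can be read back when
the per-order lists are drained later.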
Note: merging free pages when a migration fails and a lower order free
page is returned via compaction_free() is possible, but it would be too much
work. Since these free pages are not buddy pages, it is hard to identify
them using the existing PFN-based page merging algorithm.
Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
Reviewed-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Tested-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Adam Manzanares <a.manzanares@xxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Kemeng Shi <shikemeng@xxxxxxxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Luis Chamberlain <mcgrof@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Yin Fengwei <fengwei.yin@xxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
---
mm/compaction.c | 36 +++++++++++++++++++++++++++++++-----
1 file changed, 31 insertions(+), 5 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index d0a05a621b67..25908e36b97c 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1832,15 +1832,41 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
 	struct compact_control *cc = (struct compact_control *)data;
 	struct folio *dst;
 	int order = folio_order(src);
+	bool has_isolated_pages = false;
+	int start_order;
+	struct page *freepage;
+	unsigned long size;
+
+again:
+	for (start_order = order; start_order < NR_PAGE_ORDERS; start_order++)
+		if (!list_empty(&cc->freepages[start_order]))
+			break;
 
-	if (list_empty(&cc->freepages[order])) {
-		isolate_freepages(cc);
-		if (list_empty(&cc->freepages[order]))
+	/* no free pages in the list */
+	if (start_order == NR_PAGE_ORDERS) {
+		if (!has_isolated_pages) {
+			isolate_freepages(cc);
+			has_isolated_pages = true;
+			goto again;
+		} else
 			return NULL;
 	}
 
-	dst = list_first_entry(&cc->freepages[order], struct folio, lru);
-	list_del(&dst->lru);
+	freepage = list_first_entry(&cc->freepages[start_order], struct page,
+				lru);
+	size = 1 << start_order;
+
+	list_del(&freepage->lru);
+
+	while (start_order > order) {
+		start_order--;
+		size >>= 1;
+
+		list_add(&freepage[size].lru, &cc->freepages[start_order]);
+		set_page_private(&freepage[size], start_order);
+	}
+	dst = (struct folio *)freepage;
+
 	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
 	if (order)
 		prep_compound_page(&dst->page, order);
--
2.43.0