[RFC PATCH 03/26] mm: make pageblock_order 2M per default
From: Johannes Weiner
Date: Tue Apr 18 2023 - 15:14:24 EST

pageblock_order can be of various sizes, depending on configuration,
but the default is MAX_ORDER-1. Given 4k pages, that comes out to
4M. This is a large chunk for the allocator/reclaim/compaction to try
to keep grouped per migratetype. It's also unnecessary, as the majority
of higher-order allocations - THP and slab - are smaller than that.

Before subsequent patches increase the effort that goes into
maintaining migratetype isolation, it's important to first set the
defrag block size to a value that's likely to have common consumers.

Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
---
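Not part of the patch, just a quick userspace sketch of the arithmetic,
assuming 4K base pages (PAGE_SHIFT == 12) and the default MAX_ORDER of
11; ilog2_pow2() below simply stands in for the kernel's ilog2():

#include <stdio.h>

#define PAGE_SHIFT	12	/* assumed: 4K base pages */
#define MAX_ORDER	11	/* assumed: default buddy allocator limit */

/* Integer log2 of a power of two, standing in for the kernel's ilog2() */
static unsigned int ilog2_pow2(unsigned long v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

int main(void)
{
	unsigned int old_order = MAX_ORDER - 1;			       /* 10 */
	unsigned int new_order = ilog2_pow2(2U << (20 - PAGE_SHIFT)); /*  9 */

	/* order N covers 2^N base pages; shift by PAGE_SHIFT - 10 for KB */
	printf("old pageblock: order %u, %lu KB\n", old_order,
	       (1UL << old_order) << (PAGE_SHIFT - 10));
	printf("new pageblock: order %u, %lu KB\n", new_order,
	       (1UL << new_order) << (PAGE_SHIFT - 10));

	return 0;
}

This prints order 10 / 4096 KB for the old default and order 9 / 2048 KB
for the new one, i.e. the pageblock shrinks from 4M to 2M, the native
THP size on this configuration.
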
include/linux/pageblock-flags.h | 4 ++--
mm/page_alloc.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h
index 5f1ae07d724b..05b6811f8cee 100644
--- a/include/linux/pageblock-flags.h
+++ b/include/linux/pageblock-flags.h
@@ -47,8 +47,8 @@ extern unsigned int pageblock_order;
#else /* CONFIG_HUGETLB_PAGE */
-/* If huge pages are not used, group by MAX_ORDER_NR_PAGES */
-#define pageblock_order (MAX_ORDER-1)
+/* Manage fragmentation at the 2M level */
+#define pageblock_order ilog2(2U << (20 - PAGE_SHIFT))
#endif /* CONFIG_HUGETLB_PAGE */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ac03571e0532..5e04a69f6a26 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7634,7 +7634,7 @@ static inline void setup_usemap(struct zone *zone) {}
/* Initialise the number of pages represented by NR_PAGEBLOCK_BITS */
void __init set_pageblock_order(void)
{
- unsigned int order = MAX_ORDER - 1;
+ unsigned int order = ilog2(2U << (20 - PAGE_SHIFT));
/* Check that pageblock_nr_pages has not already been setup */
if (pageblock_order)
--
2.39.2