[PATCH v1 2/3] mm/memory_hotplug: don't shuffle complete zone when onlining memory
From: David Hildenbrand
Date: Tue Jun 16 2020 - 07:52:38 EST
Commit e900a918b098 ("mm: shuffle initial free memory to improve
memory-side-cache utilization") introduced shuffling of free pages
during system boot and whenever we online memory blocks.
However, whenever we online memory blocks, all pages that will be
exposed to the buddy end up getting freed via __free_one_page(). In the
general case, we free these pages in chunks of order MAX_ORDER - 1,
which corresponds to the shuffle order.
Inside __free_one_page(), we already shuffle the newly onlined pages
via "to_tail = shuffle_pick_tail();", randomly placing each freed chunk
at the head or the tail of the freelist. Drop the now-redundant
explicit shuffling of the complete zone on memory hotplug.
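For illustration only, a minimal standalone C sketch of that head/tail
decision follows. It is not kernel code: every name is made up for the
example (hence the "toy_" prefix); only the one-random-bit-per-call
behaviour mirrors shuffle_pick_tail() and the order check mirrors
is_shuffle_order().

/*
 * toy_shuffle.c - standalone sketch, NOT kernel code. It only models
 * the coin flip that __free_one_page() performs for shuffle-order
 * chunks; all identifiers are invented for this example.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for SHUFFLE_ORDER == MAX_ORDER - 1 (assuming MAX_ORDER == 11). */
#define TOY_SHUFFLE_ORDER	10

/* Modeled after shuffle_pick_tail(): consume one bit per call from a
 * lazily refilled 64-bit random pool. */
static bool toy_shuffle_pick_tail(void)
{
	static uint64_t pool;
	static unsigned int bits;
	bool ret;

	if (bits == 0) {
		bits = 64;
		/* random() only yields 31 bits each; good enough for a demo. */
		pool = ((uint64_t)random() << 32) | (uint32_t)random();
	}

	ret = pool & 1;
	pool >>= 1;
	bits--;

	return ret;
}

static bool toy_is_shuffle_order(int order)
{
	return order >= TOY_SHUFFLE_ORDER;
}

int main(void)
{
	int tail = 0, head = 0;

	srandom(1);

	/*
	 * Free a batch of max-order chunks (a 128 MiB block is 32
	 * order-10 chunks on x86-64): each chunk independently gets a
	 * coin flip deciding head vs. tail placement; here we just
	 * count the outcomes.
	 */
	for (int i = 0; i < 32; i++) {
		if (toy_is_shuffle_order(TOY_SHUFFLE_ORDER) &&
		    toy_shuffle_pick_tail())
			tail++;
		else
			head++;
	}

	printf("placed at tail: %d, at head: %d\n", tail, head);
	return 0;
}

The takeaway: each max-order chunk freed while onlining already gets
its own coin flip at free time, so an additional shuffle of the
complete zone afterwards adds little.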
Note: When hotplugging a DIMM, each memory block (e.g., 128 MiB .. 2 GiB
on x86-64) will get onlined individually, resulting in a shuffle_zone()
call (a shuffle of the complete zone) for every memory block getting
onlined.
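For a rough sense of scale (hypothetical numbers): with 128 MiB memory
blocks, onlining a 64 GiB DIMM means 512 individual online_pages()
calls and, without this patch, 512 shuffles of the complete zone, each
pass covering all memory onlined so far.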
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Kees Cook <keescook@xxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
 mm/memory_hotplug.c |  3 ---
 mm/shuffle.c        |  2 +-
 mm/shuffle.h        | 12 ------------
 3 files changed, 1 insertion(+), 16 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9b34e03e730a4..845a517649c71 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -40,7 +40,6 @@
 #include <asm/tlbflush.h>
 
 #include "internal.h"
-#include "shuffle.h"
 
 /*
  * online_page_callback contains pointer to current page onlining function.
@@ -822,8 +821,6 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 	zone->zone_pgdat->node_present_pages += onlined_pages;
 	pgdat_resize_unlock(zone->zone_pgdat, &flags);
 
-	shuffle_zone(zone);
-
 	node_states_set_node(nid, &arg);
 	if (need_zonelists_rebuild)
 		build_all_zonelists(NULL);
diff --git a/mm/shuffle.c b/mm/shuffle.c
index dd13ab851b3ee..609c26aa57db0 100644
--- a/mm/shuffle.c
+++ b/mm/shuffle.c
@@ -180,7 +180,7 @@ void __meminit __shuffle_free_memory(pg_data_t *pgdat)
 	struct zone *z;
 
 	for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++)
-		shuffle_zone(z);
+		__shuffle_zone(z);
 }
 
 bool shuffle_pick_tail(void)
diff --git a/mm/shuffle.h b/mm/shuffle.h
index 4d79f03b6658f..657e2b9ec38dd 100644
--- a/mm/shuffle.h
+++ b/mm/shuffle.h
@@ -30,14 +30,6 @@ static inline void shuffle_free_memory(pg_data_t *pgdat)
 	__shuffle_free_memory(pgdat);
 }
 
-extern void __shuffle_zone(struct zone *z);
-static inline void shuffle_zone(struct zone *z)
-{
-	if (!static_branch_unlikely(&page_alloc_shuffle_key))
-		return;
-	__shuffle_zone(z);
-}
-
 static inline bool is_shuffle_order(int order)
 {
 	if (!static_branch_unlikely(&page_alloc_shuffle_key))
@@ -54,10 +46,6 @@ static inline void shuffle_free_memory(pg_data_t *pgdat)
 {
 }
 
-static inline void shuffle_zone(struct zone *z)
-{
-}
-
 static inline void page_alloc_shuffle(enum mm_shuffle_ctl ctl)
 {
 }
--
2.26.2