Re: [PATCH v2 2/3] mm/memory_hotplug: document why shuffle_zone() is relevant

From: Dan Williams
Date: Fri Jun 19 2020 - 21:41:36 EST


On Fri, Jun 19, 2020 at 6:00 AM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> It's not completely obvious why we have to shuffle the complete zone, as
> some sort of shuffling is already performed when onlining pages via
> __free_one_page(), placing MAX_ORDER-1 pages either to the head or the tail
> of the freelist. Let's document why we have to shuffle the complete zone
> when exposing larger, contiguous physical memory areas to the buddy.
>

How about?

Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")

...just like patch 1, since that original commit was missing the proper
commentary in the code?


> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
> ---
> mm/memory_hotplug.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 9b34e03e730a4..a0d81d404823d 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -822,6 +822,14 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
> zone->zone_pgdat->node_present_pages += onlined_pages;
> pgdat_resize_unlock(zone->zone_pgdat, &flags);
>
> + /*
> + * When exposing larger, physically contiguous memory areas to the
> + * buddy, shuffling in the buddy (when freeing onlined pages, putting
> + * them either to the head or the tail of the freelist) is only helpful
> + * for mainining the shuffle, but not for creating the initial shuffle.

s/mainining/maintaining/

> + * Shuffle the whole zone to make sure the just onlined pages are
> + * properly distributed across the whole freelist.
> + */
> shuffle_zone(zone);
>
> node_states_set_node(nid, &arg);

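To illustrate the distinction the comment is making, here is a rough
user-space sketch (illustrative only; the helpers below are made up for
the example and are not kernel code). Per-free randomization only ever
places a block at one of the two ends of the list, so a long run of
contiguous blocks stays mostly clustered, while a whole-list shuffle,
analogous to what shuffle_zone() does for the zone's free areas, can
move any block to any position:

/* Illustrative sketch, not kernel code. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NBLOCKS 8

/* Simulated freelist: an array of block ids, index 0 == head. */
static int freelist[NBLOCKS];
static int nr_free;

/*
 * Per-free randomization, like head/tail placement on free: the new
 * block only ever lands at one of the two ends of the list.
 */
static void free_block_random_end(int id)
{
	if (rand() & 1) {
		/* tail */
		freelist[nr_free++] = id;
	} else {
		/* head: shift existing entries up by one */
		for (int i = nr_free; i > 0; i--)
			freelist[i] = freelist[i - 1];
		freelist[0] = id;
		nr_free++;
	}
}

/*
 * Whole-list shuffle (Fisher-Yates): every block can end up at any
 * position, which is the initial randomization that per-free
 * head/tail placement cannot provide.
 */
static void shuffle_freelist(void)
{
	for (int i = nr_free - 1; i > 0; i--) {
		int j = rand() % (i + 1);
		int tmp = freelist[i];

		freelist[i] = freelist[j];
		freelist[j] = tmp;
	}
}

static void dump(const char *what)
{
	printf("%s:", what);
	for (int i = 0; i < nr_free; i++)
		printf(" %d", freelist[i]);
	printf("\n");
}

int main(void)
{
	srand(time(NULL));

	/* "Online" a contiguous range of blocks. */
	for (int id = 0; id < NBLOCKS; id++)
		free_block_random_end(id);
	dump("after per-free head/tail placement");

	shuffle_freelist();
	dump("after whole-list shuffle");
	return 0;
}

Running it a few times shows the first dump keeping the ids largely in
two ascending runs, while the second dump is properly mixed, which is
why the whole zone gets shuffled after onlining a large contiguous
range.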
Other than the above minor fixups you can add:

Acked-by: Dan Williams <dan.j.williams@xxxxxxxxx>