Re: [PATCH v6 4/4] mm/hotplug: enable memory hotplug for non-lru movable pages

From: Naoya Horiguchi
Date: Sun Feb 05 2017 - 22:39:30 EST


On Fri, Feb 03, 2017 at 03:59:30PM +0800, Yisheng Xie wrote:
> We had considered all non-lru pages unmovable before commit
> bda807d44454 ("mm: migrate: support non-lru movable page migration").
> But now some non-lru pages, such as zsmalloc and virtio-balloon pages,
> have become movable, so we can offline memory blocks containing them by
> using non-lru page migration.
>
> This patch straightforwardly adds non-lru migration support: the
> functions that scan over a pfn range and collect pages to be migrated
> now also recognize non-lru movable pages and isolate them before
> migration.
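
(Background for readers new to non-lru movable pages: since
bda807d44454 a driver opts its pages into migration roughly as in the
sketch below; the foo_* names are made up for illustration, they are
not from this patch.)

        static const struct address_space_operations foo_aops = {
                .isolate_page   = foo_isolate_page,
                .migratepage    = foo_migratepage,
                .putback_page   = foo_putback_page,
        };

        /* foo_mapping->a_ops == &foo_aops; the page must be locked */
        lock_page(page);
        __SetPageMovable(page, foo_mapping);
        unlock_page(page);
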
>
> Signed-off-by: Yisheng Xie <xieyisheng1@xxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxxxx>
> Cc: Minchan Kim <minchan@xxxxxxxxxx>
> Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Cc: Andi Kleen <ak@xxxxxxxxxxxxxxx>
> Cc: Hanjun Guo <guohanjun@xxxxxxxxxx>
> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> Cc: Reza Arbab <arbab@xxxxxxxxxxxxxxxxxx>
> Cc: Taku Izumi <izumi.taku@xxxxxxxxxxxxxx>
> Cc: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> Cc: Xishi Qiu <qiuxishi@xxxxxxxxxx>
> ---
> mm/memory_hotplug.c | 28 +++++++++++++++++-----------
> mm/page_alloc.c     |  8 ++++++--
> 2 files changed, 23 insertions(+), 13 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index ca2723d..ea1be08 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1516,10 +1516,10 @@ int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
> }
>
> /*
> - * Scan pfn range [start,end) to find movable/migratable pages (LRU pages
> - * and hugepages). We scan pfn because it's much easier than scanning over
> - * linked list. This function returns the pfn of the first found movable
> - * page if it's found, otherwise 0.
> + * Scan pfn range [start,end) to find movable/migratable pages (LRU pages,
> + * non-lru movable pages and hugepages). We scan pfn because it's much
> + * easier than scanning over linked list. This function returns the pfn
> + * of the first found movable page if it's found, otherwise 0.
> */
> static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
> {
> @@ -1530,6 +1530,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
>                          page = pfn_to_page(pfn);
>                          if (PageLRU(page))
>                                  return pfn;
> +                        if (__PageMovable(page))
> +                                return pfn;
>                          if (PageHuge(page)) {
>                                  if (page_huge_active(page))
>                                          return pfn;
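
(For context: __PageMovable() only tests a tag that the owning driver
stored in page->mapping, so this pfn scan can flag driver pages without
taking a reference. Its definition in include/linux/page-flags.h:)

        static __always_inline int __PageMovable(struct page *page)
        {
                return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
                                        PAGE_MAPPING_MOVABLE;
        }
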
> @@ -1606,21 +1608,25 @@ static struct page *new_node_page(struct page *page, unsigned long private,
>                  if (!get_page_unless_zero(page))
>                          continue;
>                  /*
> -                 * We can skip free pages. And we can only deal with pages on
> -                 * LRU.
> +                 * We can skip free pages. And we can deal with pages on
> +                 * LRU and non-lru movable pages.
>                  */
> -                ret = isolate_lru_page(page);
> +                if (PageLRU(page))
> +                        ret = isolate_lru_page(page);
> +                else
> +                        ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>                  if (!ret) { /* Success */
>                          put_page(page);
>                          list_add_tail(&page->lru, &source);
>                          move_pages--;
> -                        inc_node_page_state(page, NR_ISOLATED_ANON +
> -                                            page_is_file_cache(page));
> +                        if (!__PageMovable(page))
> +                                inc_node_page_state(page, NR_ISOLATED_ANON +
> +                                                    page_is_file_cache(page));

If this check is equivalent to "if (PageLRU(page))" in this context,
PageLRU(page) looks better, because you already added the same "if"
above.
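
That is, something like this (untested):

                if (!ret) { /* Success */
                        put_page(page);
                        list_add_tail(&page->lru, &source);
                        move_pages--;
                        /* only LRU pages are accounted as isolated */
                        if (PageLRU(page))
                                inc_node_page_state(page, NR_ISOLATED_ANON +
                                                page_is_file_cache(page));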

Otherwise, looks good to me.

Thanks,
Naoya Horiguchi