Re: [PATCH v3 5/8] mm/migrate: make a standard migration target allocation function

From: Michal Hocko
Date: Thu Jun 25 2020 - 08:05:57 EST


On Tue 23-06-20 15:13:45, Joonsoo Kim wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
>
> There are some similar functions for migration target allocation. Since
> there is no fundamental difference, it's better to keep just one rather
> than keeping all variants. This patch implements a base migration target
> allocation function. In the following patches, the variants will be converted
> to use this function.
>
> Note that the PageHighMem() call in the previous function is changed to an
> open-coded "is_highmem_idx()" check since it improves readability.

I was a little bit surprised that alloc_migration_target still uses the
private argument while it only accepts a migration_target_control
structure pointer, but then I noticed that you are using it as a
real callback in a later patch.
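
IIUC the call sites in the later patches then end up looking roughly like
this (untested sketch; target_nid and pagelist are just placeholders for
whatever the real caller has at hand):

	struct migration_target_control mtc = {
		.nid = target_nid,
		.nmask = &node_states[N_MEMORY],
		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
	};

	/*
	 * alloc_migration_target() matches new_page_t, so the control
	 * structure travels through the opaque private argument.
	 */
	migrate_pages(&pagelist, alloc_migration_target, NULL,
			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);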

> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>

Few questions inline
[...]

> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 47b8ccb..820ea5e 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1648,9 +1648,13 @@ EXPORT_SYMBOL(unpoison_memory);
>
> static struct page *new_page(struct page *p, unsigned long private)
> {
> - int nid = page_to_nid(p);
> + struct migration_target_control mtc = {
> + .nid = page_to_nid(p),
> + .nmask = &node_states[N_MEMORY],

This could be .nmask = NULL, right? alloc_migration_target doesn't
modify the node mask and a NULL nodemask is always interpreted as all
available nodes.
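
IOW something like the following should be enough (untested):

	struct migration_target_control mtc = {
		.nid = page_to_nid(p),
		.nmask = NULL,
		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
	};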

> + .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> + };
>
> - return new_page_nodemask(p, nid, &node_states[N_MEMORY]);
> + return alloc_migration_target(p, (unsigned long)&mtc);
> }
>
[...]
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 634f1ea..3afff59 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1536,29 +1536,34 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
> return rc;
> }
>
> -struct page *new_page_nodemask(struct page *page,
> - int preferred_nid, nodemask_t *nodemask)
> +struct page *alloc_migration_target(struct page *page, unsigned long private)
> {
> - gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
> + struct migration_target_control *mtc;
> + gfp_t gfp_mask;
> unsigned int order = 0;
> struct page *new_page = NULL;
> + int zidx;
> +
> + mtc = (struct migration_target_control *)private;
> + gfp_mask = mtc->gfp_mask;
>
> if (PageHuge(page)) {
> return alloc_huge_page_nodemask(
> - page_hstate(compound_head(page)),
> - preferred_nid, nodemask, 0, false);
> + page_hstate(compound_head(page)), mtc->nid,
> + mtc->nmask, gfp_mask, false);
> }
>
> if (PageTransHuge(page)) {
> + gfp_mask &= ~__GFP_RECLAIM;

What's up with this gfp_mask modification?

> gfp_mask |= GFP_TRANSHUGE;
> order = HPAGE_PMD_ORDER;
> }
> -
> - if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
> + zidx = zone_idx(page_zone(page));
> + if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
> gfp_mask |= __GFP_HIGHMEM;
>
> new_page = __alloc_pages_nodemask(gfp_mask, order,
> - preferred_nid, nodemask);
> + mtc->nid, mtc->nmask);
>
> if (new_page && PageTransHuge(new_page))
> prep_transhuge_page(new_page);
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index aec26d9..adba031 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -309,7 +309,11 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
>
> struct page *alloc_migrate_target(struct page *page, unsigned long private)
> {
> - int nid = page_to_nid(page);
> + struct migration_target_control mtc = {
> + .nid = page_to_nid(page),
> + .nmask = &node_states[N_MEMORY],

.nmask = NULL again.

> + .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> + };
>
> - return new_page_nodemask(page, nid, &node_states[N_MEMORY]);
> + return alloc_migration_target(page, (unsigned long)&mtc);
> }
> --
> 2.7.4

--
Michal Hocko
SUSE Labs