Re: [RFC 1/6] mm/migrate_pages: separate huge page and normal pages migration
From: Zi Yan
Date: Wed Sep 21 2022 - 12:10:51 EST
On 21 Sep 2022, at 2:06, Huang Ying wrote:
> This is a preparation patch to batch the page unmapping and moving for
> normal pages and THPs. Based on that, we can batch the TLB
> shootdown during page migration and make it possible to use some
> hardware accelerator for the page copying.
>
> In this patch, the huge page (PageHuge()) migration and the normal page
> and THP migration are separated in migrate_pages() to make it easy to
> change the normal page and THP migration implementation.
>
> Signed-off-by: "Huang, Ying" <ying.huang@xxxxxxxxx>
> Cc: Zi Yan <ziy@xxxxxxxxxx>
> Cc: Yang Shi <shy828301@xxxxxxxxx>
> Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> Cc: Oscar Salvador <osalvador@xxxxxxx>
> Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> ---
> mm/migrate.c | 73 +++++++++++++++++++++++++++++++++++++++++++++-------
> 1 file changed, 64 insertions(+), 9 deletions(-)
Maybe it would be better to have two subroutines, one for hugetlb migration
and one for normal page and THP migration, since migrate_pages() becomes
very large at this point. A rough sketch of what I mean is below.
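
Something along these lines (completely untested, the helper name and the
parameter list are just placeholders) would keep the top-level flow of
migrate_pages() readable; the normal page / THP loop could be factored out
in the same way:

/*
 * Untested sketch: the hugetlb pass from above, factored out of
 * migrate_pages().  Returns the number of hugetlb pages that could
 * not be migrated, or -ENOMEM if the caller should give up entirely.
 */
static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
		free_page_t put_new_page, unsigned long private,
		enum migrate_mode mode, int reason,
		int *nr_succeeded, int *nr_failed_pages,
		struct list_head *ret_pages)
{
	int rc, pass, nr_subpages;
	int nr_failed = 0, nr_retry_pages = 0, retry = 1;
	struct page *page, *page2;

	for (pass = 0; pass < 10 && retry; pass++) {
		retry = 0;
		nr_retry_pages = 0;

		list_for_each_entry_safe(page, page2, from, lru) {
			if (!PageHuge(page))
				continue;

			nr_subpages = compound_nr(page);
			cond_resched();

			rc = unmap_and_move_huge_page(get_new_page,
					put_new_page, private, page,
					pass > 2, mode, reason, ret_pages);
			switch (rc) {
			case -ENOSYS:
				/* Hugetlb migration is unsupported */
				nr_failed++;
				*nr_failed_pages += nr_subpages;
				list_move_tail(&page->lru, ret_pages);
				break;
			case -ENOMEM:
				/* Low on memory: stop migrating altogether */
				*nr_failed_pages += nr_subpages + nr_retry_pages;
				return -ENOMEM;
			case -EAGAIN:
				retry++;
				nr_retry_pages += nr_subpages;
				break;
			case MIGRATEPAGE_SUCCESS:
				*nr_succeeded += nr_subpages;
				break;
			default:
				/* Permanent failure, already moved to ret_pages */
				nr_failed++;
				*nr_failed_pages += nr_subpages;
				break;
			}
		}
	}
	/* Pages still returning -EAGAIN after 10 passes count as failed */
	*nr_failed_pages += nr_retry_pages;
	return nr_failed + retry;
}

migrate_pages() would then call this once up front, jump to its existing
out: label on -ENOMEM, and run the remaining loop only on what is left on
the list, which also keeps the hugetlb -ENOSYS handling out of the THP path.
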
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 571d8c9fd5bc..117134f1c6dc 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1414,6 +1414,66 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>
> trace_mm_migrate_pages_start(mode, reason);
>
> + for (pass = 0; pass < 10 && retry; pass++) {
> + retry = 0;
> +
> + list_for_each_entry_safe(page, page2, from, lru) {
> + nr_subpages = compound_nr(page);
> + cond_resched();
> +
> + if (!PageHuge(page))
> + continue;
> +
> + rc = unmap_and_move_huge_page(get_new_page,
> + put_new_page, private, page,
> + pass > 2, mode, reason,
> + &ret_pages);
> + /*
> + * The rules are:
> + * Success: hugetlb page will be put back
> + * -EAGAIN: stay on the from list
> + * -ENOMEM: stay on the from list
> + * -ENOSYS: stay on the from list
> + * Other errno: put on ret_pages list then splice to
> + * from list
> + */
> + switch(rc) {
> + case -ENOSYS:
> + /* Hugetlb migration is unsupported */
> + nr_failed++;
> + nr_failed_pages += nr_subpages;
> + list_move_tail(&page->lru, &ret_pages);
> + break;
> + case -ENOMEM:
> + /*
> + * When memory is low, don't bother to try to migrate
> + * other pages, just exit.
> + */
> + nr_failed++;
> + nr_failed_pages += nr_subpages + nr_retry_pages;
> + goto out;
> + case -EAGAIN:
> + retry++;
> + nr_retry_pages += nr_subpages;
> + break;
> + case MIGRATEPAGE_SUCCESS:
> + nr_succeeded += nr_subpages;
> + break;
> + default:
> + /*
> + * Permanent failure (-EBUSY, etc.):
> + * unlike -EAGAIN case, the failed page is
> + * removed from migration page list and not
> + * retried in the next outer loop.
> + */
> + nr_failed++;
> + nr_failed_pages += nr_subpages;
> + break;
> + }
> + }
> + }
> + nr_failed += retry;
> + retry = 1;
> thp_subpage_migration:
> for (pass = 0; pass < 10 && (retry || thp_retry); pass++) {
> retry = 0;
> @@ -1431,18 +1491,14 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
> cond_resched();
>
> if (PageHuge(page))
> - rc = unmap_and_move_huge_page(get_new_page,
> - put_new_page, private, page,
> - pass > 2, mode, reason,
> - &ret_pages);
> - else
> - rc = unmap_and_move(get_new_page, put_new_page,
> + continue;
> +
> + rc = unmap_and_move(get_new_page, put_new_page,
> private, page, pass > 2, mode,
> reason, &ret_pages);
> /*
> * The rules are:
> - * Success: non hugetlb page will be freed, hugetlb
> - * page will be put back
> + * Success: page will be freed
> * -EAGAIN: stay on the from list
> * -ENOMEM: stay on the from list
> * -ENOSYS: stay on the from list
> @@ -1468,7 +1524,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
> nr_thp_split++;
> break;
> }
> - /* Hugetlb migration is unsupported */
> } else if (!no_subpage_counting) {
> nr_failed++;
> }
> --
> 2.35.1
--
Best Regards,
Yan, Zi