Re: Infinite looping observed in __offline_pages
From: Mike Kravetz
Date: Wed Aug 22 2018 - 15:01:21 EST
On 08/22/2018 02:30 AM, Aneesh Kumar K.V wrote:
> commit 2e9d754ac211f2af3731f15df3cd8cd070b4cc54
> Author: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
> Date: Tue Aug 21 14:17:55 2018 +0530
>
> mm/hugetlb: filter out hugetlb pages if HUGEPAGE migration is not supported.
>
> When scanning for movable pages, filter out hugetlb pages if hugepage migration
> is not supported. Without this we hit an infinite loop in __offline_pages where
> we do
> pfn = scan_movable_pages(start_pfn, end_pfn);
> if (pfn) { /* We have movable pages */
> ret = do_migrate_range(pfn, end_pfn);
> goto repeat;
> }
>
> We do support hugetlb migration only if the hugetlb pages are at PMD level. Here
I thought migration at PGD level was added for POWER? commit 94310cbcaa3c
("mm/madvise: enable (soft|hard) offline of HugeTLB pages at PGD level").
I only remember because I did not fully understand the use case. :)
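
If I am reading the current code correctly, hugepage_migration_supported()
already handles both levels, roughly (quoting from memory, so double-check):

static inline bool hugepage_migration_supported(struct hstate *h)
{
#ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
	/* only PMD and PGD sized huge pages are migratable today */
	if ((huge_page_shift(h) == PMD_SHIFT) ||
	    (huge_page_shift(h) == PGDIR_SHIFT))
		return true;
	else
		return false;
#else
	return false;
#endif
}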
> we just check for the kernel config. The gigantic page size check is done in
> page_huge_active.
>
> Reported-by: Haren Myneni <haren@xxxxxxxxxxxxxxxxxx>
> CC: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 4eb6e824a80c..f9bdea685cf4 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1338,7 +1338,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
> return pfn;
> if (__PageMovable(page))
> return pfn;
> - if (PageHuge(page)) {
> + if (IS_ENABLED(CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION) &&
> + PageHuge(page)) {
How about using hugepage_migration_supported instead? It would automatically
catch those non-migratable huge page sizes. Something like:
if (PageHuge(page) &&
hugepage_migration_supported(page_hstate(page))) {
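
If we go that route, the same check could probably replace the IS_ENABLED()
test in the has_unmovable_pages() hunk below as well, perhaps something like
this (untested; going through compound_head() since page may be a tail page
at that point):

	if (PageHuge(page)) {
		struct page *head = compound_head(page);

		/*
		 * Treat the range as unmovable if this huge page
		 * size can not be migrated.
		 */
		if (!hugepage_migration_supported(page_hstate(head)))
			goto unmovable;

		iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
		continue;
	}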
--
Mike Kravetz
> if (page_huge_active(page))
> return pfn;
> else
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 15ea511fb41c..a3f81e18c882 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7649,6 +7649,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> * handle each tail page individually in migration.
> */
> if (PageHuge(page)) {
> +
> + if (!IS_ENABLED(CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION))
> + goto unmovable;
> +
> iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
> continue;
> }
>