Re: [PATCH 2/2] mempolicy: do not try to queue pages from !vma_migratable()

From: Andrew Morton
Date: Mon Feb 01 2016 - 17:28:41 EST


On Mon, 1 Feb 2016 16:26:09 +0300 "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx> wrote:

> Maybe I'm missing something, but I don't see a reason why we try to queue
> pages from non-migratable VMAs.
>
> The only case in which we can queue pages from such a VMA is MPOL_MF_STRICT
> plus MPOL_MF_MOVE or MPOL_MF_MOVE_ALL for a VMA which has pages on the LRU
> but whose gfp mask is not suitable for migration (see the mapping_gfp_mask()
> check in vma_migratable()). That looks like a bug to me.
>
> Let's filter out non-migratable VMAs at the start of queue_pages_test_walk()
> and go to queue_pages_pte_range() only if the MPOL_MF_MOVE or
> MPOL_MF_MOVE_ALL flag is set.
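
For reference, vma_migratable() currently looks roughly like the sketch below
(written from memory of include/linux/mempolicy.h rather than pasted verbatim,
so treat the details as approximate):

static inline int vma_migratable(struct vm_area_struct *vma)
{
	/* Device and raw-PFN mappings have no struct pages to migrate. */
	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
		return 0;

#ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
	if (vma->vm_flags & VM_HUGETLB)
		return 0;
#endif

	/*
	 * Migration allocates pages in the highest zone. If we cannot
	 * do so then migration (at least from node to node) is not
	 * possible -- this is the mapping_gfp_mask() check referred to
	 * above.
	 */
	if (vma->vm_file &&
		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
								< policy_zone)
		return 0;
	return 1;
}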

Conflicts with
http://ozlabs.org/~akpm/mmots/broken-out/mm-mempolicy-skip-vm_hugetlb-and-vm_mixedmap-vma-for-lazy-mbind.patch.
I resolved it thusly; please review:

--- a/mm/mempolicy.c~mempolicy-do-not-try-to-queue-pages-from-vma_migratable
+++ a/mm/mempolicy.c
@@ -548,8 +548,7 @@ retry:
 			goto retry;
 		}
 
-		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
-			migrate_page_add(page, qp->pagelist, flags);
+		migrate_page_add(page, qp->pagelist, flags);
 	}
 	pte_unmap_unlock(pte - 1, ptl);
 	cond_resched();
@@ -625,7 +624,7 @@ static int queue_pages_test_walk(unsigne
 	unsigned long endvma = vma->vm_end;
 	unsigned long flags = qp->flags;
 
-	if (vma->vm_flags & VM_PFNMAP)
+	if (!vma_migratable(vma))
 		return 1;
 
 	if (endvma > end)
@@ -644,17 +643,15 @@ static int queue_pages_test_walk(unsigne
 
 	if (flags & MPOL_MF_LAZY) {
 		/* Similar to task_numa_work, skip inaccessible VMAs */
-		if (vma_migratable(vma) && !is_vm_hugetlb_page(vma) &&
+		if (!is_vm_hugetlb_page(vma) &&
 			(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)) &&
 			!(vma->vm_flags & VM_MIXEDMAP))
 			change_prot_numa(vma, start, endvma);
 		return 1;
 	}
 
-	if ((flags & MPOL_MF_STRICT) ||
-	    ((flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) &&
-	     vma_migratable(vma)))
-		/* queue pages from current vma */
+	/* queue pages from current vma */
+	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
 		return 0;
 	return 1;
 }
_
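
For review convenience, with that resolution applied queue_pages_test_walk()
ends up looking roughly like the sketch below. The parts outside the hunks
(the MPOL_MF_DISCONTIG_OK checks and the qp->prev bookkeeping) are
reconstructed from memory of the mmotm tree, so treat them as approximate
rather than the exact file contents:

static int queue_pages_test_walk(unsigned long start, unsigned long end,
				struct mm_walk *walk)
{
	struct vm_area_struct *vma = walk->vma;
	struct queue_pages *qp = walk->private;
	unsigned long endvma = vma->vm_end;
	unsigned long flags = qp->flags;

	/* Non-migratable VMAs are now skipped before any queueing work. */
	if (!vma_migratable(vma))
		return 1;

	if (endvma > end)
		endvma = end;
	if (vma->vm_start > start)
		start = vma->vm_start;

	if (!(flags & MPOL_MF_DISCONTIG_OK)) {
		if (!vma->vm_next && vma->vm_end < end)
			return -EFAULT;
		if (qp->prev && qp->prev->vm_end < vma->vm_start)
			return -EFAULT;
	}

	qp->prev = vma;

	if (flags & MPOL_MF_LAZY) {
		/* Similar to task_numa_work, skip inaccessible VMAs */
		if (!is_vm_hugetlb_page(vma) &&
			(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)) &&
			!(vma->vm_flags & VM_MIXEDMAP))
			change_prot_numa(vma, start, endvma);
		return 1;
	}

	/* queue pages from current vma */
	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
		return 0;
	return 1;
}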