Re: [PATCH 2/7] mm/rmap: map_pte() was not handling private ZONE_DEVICE page properly

From: Balbir Singh
Date: Thu Aug 30 2018 - 10:05:46 EST


On Fri, Aug 24, 2018 at 03:25:44PM -0400, jglisse@xxxxxxxxxx wrote:
> From: Ralph Campbell <rcampbell@xxxxxxxxxx>
>
> Private ZONE_DEVICE pages use a special pte entry and thus are not
> present. Properly handle this case in map_pte(); it is already handled
> in check_pte(), but the map_pte() part was most probably lost in a rebase.
>
> Without this patch the slow migration path cannot migrate private
> ZONE_DEVICE memory back to regular memory. This was found after stress
> testing migration back to system memory. Ultimately this can send the
> CPU into an infinite page fault loop on the special swap entry.
>
> Signed-off-by: Ralph Campbell <rcampbell@xxxxxxxxxx>
> Signed-off-by: Jérôme Glisse <jglisse@xxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> ---
> mm/page_vma_mapped.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index ae3c2a35d61b..1cf5b9bfb559 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -21,6 +21,15 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
> if (!is_swap_pte(*pvmw->pte))
> return false;
> } else {
> + if (is_swap_pte(*pvmw->pte)) {
> + swp_entry_t entry;
> +
> + /* Handle un-addressable ZONE_DEVICE memory */
> + entry = pte_to_swp_entry(*pvmw->pte);
> + if (is_device_private_entry(entry))
> + return true;
> + }
> +
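
For context on the "infinite page fault loop" mentioned in the
changelog: as I understand it, the CPU keeps faulting on the
device-private swap entry because the page is never migrated back to
system memory. A rough sketch, from memory of the do_swap_page() path
in this era (not copied from the tree, so names and details may be
slightly off):

	entry = pte_to_swp_entry(vmf->orig_pte);
	if (unlikely(non_swap_entry(entry))) {
		if (is_migration_entry(entry)) {
			migration_entry_wait(vma->vm_mm, vmf->pmd,
					     vmf->address);
		} else if (is_device_private_entry(entry)) {
			/*
			 * Un-addressable device memory: ask the driver to
			 * migrate the page back to system memory. If that
			 * migration never succeeds, the fault just retries
			 * and we hit the same special entry again.
			 */
			ret = device_private_entry_fault(vma, vmf->address,
							 entry, vmf->flags,
							 vmf->pmd);
		}
		...
	}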

This happens just for !PVMW_SYNC && PVMW_MIGRATION? I presume this
is triggered via the remove_migration_pte() code path? Doesn't
returning true here imply that we've taken the ptl lock for the
pvmw, which this early return never does?
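
To make that last question concrete: the tail of map_pte() is what
takes the pte lock, so an early "return true" in the new branch skips
it. From memory (please double-check against the tree), map_pte() with
this patch applied ends up roughly as:

	static bool map_pte(struct page_vma_mapped_walk *pvmw)
	{
		pvmw->pte = pte_offset_map(pvmw->pmd, pvmw->address);
		if (!(pvmw->flags & PVMW_SYNC)) {
			if (pvmw->flags & PVMW_MIGRATION) {
				if (!is_swap_pte(*pvmw->pte))
					return false;
			} else {
				if (is_swap_pte(*pvmw->pte)) {
					swp_entry_t entry;

					/* Handle un-addressable ZONE_DEVICE memory */
					entry = pte_to_swp_entry(*pvmw->pte);
					if (is_device_private_entry(entry))
						/* returns with pvmw->ptl not taken */
						return true;
				}

				if (!pte_present(*pvmw->pte))
					return false;
			}
		}
		pvmw->ptl = pte_lockptr(pvmw->vma->vm_mm, pvmw->pmd);
		spin_lock(pvmw->ptl);
		return true;
	}

If that is right, callers of page_vma_mapped_walk() that expect to be
holding pvmw->ptl for a mapped pte would be surprised here; should the
device-private case fall through and take the lock rather than return
early?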

Balbir