[PATCH RFC 02/10] mm: Handle pmd entries in follow_pfn()
From: Joao Martins
Date: Fri Jan 10 2020 - 14:07:11 EST
When follow_pfn() hits a pmd_huge() entry it won't return a valid PFN,
given its usage of follow_pte(). Fix that up by switching to
follow_pte_pmd() and passing a @pmdpp, which lets us look at the pmd
entry directly. If we encounter such a huge page, the pfn is calculated
from the PMD plus the page offset of @address within it. Since no pte
gets mapped in that case, only the pmd lock is dropped on the way out.

This allows KVM to handle 2M hugepage PFNs on VM_PFNMAP vmas.

Signed-off-by: Joao Martins <joao.m.martins@xxxxxxxxxx>
---
 mm/memory.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)
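
As an aside, the pfn arithmetic in the pmd branch can be sanity-checked
in isolation. Below is a minimal userspace sketch of the same
calculation; the 2M geometry (PMD_SHIFT == 21, PAGE_SHIFT == 12) is an
x86-64 assumption and all names are illustrative only:

  #include <stdio.h>

  /* Assumed x86-64 geometry, for illustration only. */
  #define PAGE_SHIFT	12
  #define PMD_SHIFT	21
  #define PMD_MASK	(~((1UL << PMD_SHIFT) - 1))

  int main(void)
  {
  	unsigned long pmd_base_pfn = 0x80000;	/* stands in for pmd_pfn(*pmdpp) */
  	unsigned long address = 0x7f0000234000UL;

  	/* Offset of @address within its 2M region, in 4K pages. */
  	unsigned long offset = (address & ~PMD_MASK) >> PAGE_SHIFT;

  	/* Prints "pfn = 0x80034". */
  	printf("pfn = 0x%lx\n", pmd_base_pfn + offset);
  	return 0;
  }

The point being that for a 2M-backed address we resolve the exact 4K
frame, rather than returning the head pfn of the huge mapping.
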
diff --git a/mm/memory.c b/mm/memory.c
index cfc3668bddeb..db99684d2cb3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4366,6 +4366,7 @@ EXPORT_SYMBOL(follow_pte_pmd);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	unsigned long *pfn)
 {
+	pmd_t *pmdpp = NULL;
 	int ret = -EINVAL;
 	spinlock_t *ptl;
 	pte_t *ptep;
@@ -4373,10 +4374,17 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
 		return ret;
 
-	ret = follow_pte(vma->vm_mm, address, &ptep, &ptl);
+	ret = follow_pte_pmd(vma->vm_mm, address, NULL,
+			     &ptep, &pmdpp, &ptl);
 	if (ret)
 		return ret;
-	*pfn = pte_pfn(*ptep);
-	pte_unmap_unlock(ptep, ptl);
+	if (pmdpp) {
+		/* No pte was mapped here, so only drop the pmd lock. */
+		*pfn = pmd_pfn(*pmdpp) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
+		spin_unlock(ptl);
+	} else {
+		*pfn = pte_pfn(*ptep);
+		pte_unmap_unlock(ptep, ptl);
+	}
 	return 0;
 }
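
For completeness, a sketch of how a caller would consume this; the
helper name, @vma and @address are assumptions here, not part of this
patch:

  /* Hypothetical helper, for illustration only. */
  static int example_lookup(struct vm_area_struct *vma,
  			    unsigned long address)
  {
  	unsigned long pfn;
  	int ret;

  	ret = follow_pfn(vma, address, &pfn);
  	if (ret)
  		return ret;

  	/* @pfn names the exact 4K frame backing @address, even when
  	 * the VMA is mapped with 2M PMDs. */
  	return 0;
  }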
--
2.17.1