[PATCH 4.9 09/66] KVM: arm/arm64: Check pagesize when allocating a hugepage at Stage 2

From: Greg Kroah-Hartman
Date: Mon Jan 29 2018 - 08:06:02 EST


4.9-stable review patch. If anyone has any objections, please let me know.

------------------

From: Punit Agrawal <punit.agrawal@xxxxxxx>

commit c507babf10ead4d5c8cca704539b170752a8ac84 upstream.

KVM only supports PMD hugepages at stage 2 but doesn't actually check
that the provided hugepage memory pagesize is PMD_SIZE before populating
stage 2 entries.

In cases where the backing hugepage size is smaller than PMD_SIZE (such
as when using contiguous hugepages), KVM can end up creating stage 2
mappings that extend beyond the supplied memory.

Fix this by checking the pagesize of the userspace VMA before creating
a PMD hugepage at stage 2.

Fixes: 66b3923a1a0f77a ("arm64: hugetlb: add support for PTE contiguous bit")
Signed-off-by: Punit Agrawal <punit.agrawal@xxxxxxx>
Cc: Marc Zyngier <marc.zyngier@xxxxxxx>
Reviewed-by: Christoffer Dall <christoffer.dall@xxxxxxxxxx>
Signed-off-by: Christoffer Dall <christoffer.dall@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
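Not part of the patch: a minimal stand-alone C sketch of the check being
added, for illustration only. It assumes the usual arm64 4K-granule sizes
(PMD_SIZE = 2M, CONT_PTE_SIZE = 64K) and a hypothetical
vma_backing_pagesize() stand-in for the kernel's vma_kernel_pagesize().
A 64K contiguous hugepage VMA would pass the old is_vm_hugetlb_page()
style test but fails the new size check, so KVM falls back to PTE
mappings instead of installing a 2M stage 2 block over it.

/*
 * Illustrative user-space sketch, not kernel code.  Sizes assume an
 * arm64 4K-granule configuration; vma_backing_pagesize() is a
 * hypothetical stand-in for the kernel's vma_kernel_pagesize().
 */
#include <stdbool.h>
#include <stdio.h>

#define PMD_SIZE	(2UL * 1024 * 1024)	/* stage 2 PMD block: 2M */
#define CONT_PTE_SIZE	(64UL * 1024)		/* contiguous-PTE hugepage: 64K */

struct vma {
	unsigned long pagesize;	/* size of the hugepages backing this VMA */
};

static unsigned long vma_backing_pagesize(const struct vma *vma)
{
	return vma->pagesize;
}

/*
 * The old logic asked "is this a hugetlb VMA?", which is also true for
 * contiguous hugepages smaller than PMD_SIZE, so KVM could map 2M at
 * stage 2 over a smaller allocation.  The new logic only uses a PMD
 * hugepage when the backing pagesize is exactly PMD_SIZE.
 */
static bool use_pmd_hugepage(const struct vma *vma, bool logging_active)
{
	return vma_backing_pagesize(vma) == PMD_SIZE && !logging_active;
}

int main(void)
{
	struct vma pmd_hugepage_vma  = { .pagesize = PMD_SIZE };
	struct vma cont_hugepage_vma = { .pagesize = CONT_PTE_SIZE };

	printf("2M hugetlb VMA   -> PMD hugepage at stage 2: %d\n",
	       use_pmd_hugepage(&pmd_hugepage_vma, false));	/* prints 1 */
	printf("64K cont hugetlb -> PMD hugepage at stage 2: %d\n",
	       use_pmd_hugepage(&cont_hugepage_vma, false));	/* prints 0 */

	return 0;
}

Compiled with gcc, the first case prints 1 and the second prints 0,
the same decision the one-line change below makes in user_mem_abort().
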
arch/arm/kvm/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1284,7 +1284,7 @@ static int user_mem_abort(struct kvm_vcp
 		return -EFAULT;
 	}
 
-	if (is_vm_hugetlb_page(vma) && !logging_active) {
+	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
 		hugetlb = true;
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {