[PATCH 5.8 041/177] KVM: arm64: Update page shift if stage 2 block mapping not supported

From: Greg Kroah-Hartman
Date: Tue Sep 15 2020 - 19:37:29 EST


From: Alexandru Elisei <alexandru.elisei@xxxxxxx>

[ Upstream commit 7b75cd5128421c673153efb1236705696a1a9812 ]

Commit 196f878a7ac2e ("KVM: arm/arm64: Signal SIGBUS when stage2 discovers
hwpoison memory") modifies user_mem_abort() to send a SIGBUS signal when
the fault IPA maps to a hwpoisoned page. Commit 1559b7583ff6 ("KVM:
arm/arm64: Re-check VMA on detecting a poisoned page") changed
kvm_send_hwpoison_signal() to use the page shift instead of the VMA because
at that point the code had already released the mmap lock, which means
userspace could have modified the VMA.

If userspace uses hugetlbfs for the VM memory, user_mem_abort() tries to
map the guest fault IPA using block mappings in stage 2. That is not
always possible if, for example, userspace uses dirty page logging for
the VM. Update the page shift accordingly in the cases where we downgrade
the stage 2 entry from a block mapping to a page.

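For context, the relevant path in the 5.8 user_mem_abort() looks roughly
like the sketch below. This is a trimmed, illustrative excerpt rather than
a verbatim quote of the tree; helpers outside the quoted hunk
(is_vm_hugetlb_page(), gfn_to_pfn_prot(), send_sig_mceerr(), ...) are
recalled from the surrounding 5.8 code. The point is that vma_shift is
what ends up as the si_addr_lsb of the SIGBUS, so it has to be downgraded
together with vma_pagesize:

	/* Illustrative sketch of the 5.8 fault path, trimmed. */
	if (is_vm_hugetlb_page(vma))
		vma_shift = huge_page_shift(hstate_vma(vma)); /* e.g. PMD_SHIFT */
	else
		vma_shift = PAGE_SHIFT;

	vma_pagesize = 1ULL << vma_shift;
	if (logging_active || (vma->vm_flags & VM_PFNMAP) ||
	    !fault_supports_stage2_huge_mapping(memslot, hva, vma_pagesize)) {
		force_pte = true;
		vma_pagesize = PAGE_SIZE;
		vma_shift = PAGE_SHIFT;	/* this patch: keep the shift in sync */
	}

	/* ... the mmap lock is released here ... */

	pfn = gfn_to_pfn_prot(kvm, gfn, write_fault, &writable);
	if (pfn == KVM_PFN_ERR_HWPOISON) {
		/* without the fix, a stale block-sized shift is reported */
		kvm_send_hwpoison_signal(hva, vma_shift);
		return 0;
	}

	/* kvm_send_hwpoison_signal() forwards the shift as the SIGBUS lsb: */
	static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
	{
		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb, current);
	}

With only vma_pagesize reset, a fault downgraded to a 4K mapping (for
instance because dirty page logging is active) would still advertise a
block-sized si_addr_lsb to userspace on hwpoison.
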
Fixes: 1559b7583ff6 ("KVM: arm/arm64: Re-check VMA on detecting a poisoned page")
Signed-off-by: Alexandru Elisei <alexandru.elisei@xxxxxxx>
Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
Reviewed-by: Gavin Shan <gshan@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20200901133357.52640-2-alexandru.elisei@xxxxxxx
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
arch/arm64/kvm/mmu.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index bd47f06739d6c..d1320f3f2b137 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1873,6 +1873,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	    !fault_supports_stage2_huge_mapping(memslot, hva, vma_pagesize)) {
 		force_pte = true;
 		vma_pagesize = PAGE_SIZE;
+		vma_shift = PAGE_SHIFT;
 	}
 
 	/*
--
2.25.1