On 2019/4/10 23:23, Suzuki K Poulose wrote:
If we are checking whether the stage2 can map PAGE_SIZE,
we don't have to do the boundary checks as both the host
VMA and the guest memslots are page aligned. Bail the case
easily.
Cc: Christoffer Dall <christoffer.dall@xxxxxxx>
Cc: Marc Zyngier <marc.zyngier@xxxxxxx>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@xxxxxxx>
---
 virt/kvm/arm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index a39dcfd..6d73322 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1624,6 +1624,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 	hva_t uaddr_start, uaddr_end;
 	size_t size;
 
+	/* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
+	if (map_size == PAGE_SIZE)
+		return true;
+
 	size = memslot->npages * PAGE_SIZE;
 	gpa_start = memslot->base_gfn << PAGE_SHIFT;
We can do a comment clean up as well in this patch. The quoted code
already uses PAGE_SHIFT, but the existing alignment comment in this
function still shifts by the page size:

s/<< PAGE_SIZE/<< PAGE_SHIFT/