[PATCH v2 5/8] KVM: arm64: Write protect page table entries one mask bit at a time
From: Keqian Zhu
Date: Thu Jul 02 2020 - 09:56:29 EST
When the dirty log is cleared, page table entries are write protected
according to a mask. Previously we write protected the whole contiguous
range of entries covered by the mask, from __ffs(mask) to __fls(mask).
Some bits within that range may be zero, but since the kvm mmu lock is
held, write protecting the extra entries at those positions is harmless.
We are about to add support for hardware management of the dirty state
to arm64, and then holding the kvm mmu lock will no longer be enough:
write protecting an entry whose mask bit is clear could discard dirty
state that the hardware has recorded for it. Instead, write protect
entries one by one, touching only those whose mask bit is set.
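
For illustration, here is a minimal user-space sketch (hypothetical,
not part of this patch) contrasting the two strategies; wp_range() is
a stub standing in for stage2_wp_range(), and the mask and base_gfn
values are made up:

	#include <stdio.h>

	/* Stub for KVM's stage2_wp_range(); just prints the GFN range. */
	static void wp_range(unsigned long start_gfn, unsigned long end_gfn)
	{
		printf("write protect GFNs [%lu, %lu)\n", start_gfn, end_gfn);
	}

	int main(void)
	{
		unsigned long mask = 0x9;	/* bits 0 and 3 set, 1-2 clear */
		unsigned long base_gfn = 100;
		unsigned int i;
		unsigned int first = __builtin_ctzl(mask);	/* like __ffs() */
		/* like __fls(); assumes 64-bit long */
		unsigned int last = 63 - __builtin_clzl(mask);

		/* Old behavior: one call covering the whole ffs..fls span,
		 * including GFNs 101 and 102 whose mask bits are clear. */
		wp_range(base_gfn + first, base_gfn + last + 1);

		/* New behavior: one call per set bit; clear bits are skipped. */
		for (i = first; i <= last; i++)
			if (mask & (1UL << i))
				wp_range(base_gfn + i, base_gfn + i + 1);

		return 0;
	}

With mask = 0x9 the old code touches GFNs [100, 104), while the new
code touches only [100, 101) and [103, 104).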
Signed-off-by: Keqian Zhu <zhukeqian1@xxxxxxxxxx>
Signed-off-by: Peng Liang <liangpeng10@xxxxxxxxxx>
---
arch/arm64/kvm/mmu.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d0c34549ef3b..adfa62f1fced 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1703,10 +1703,16 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 		gfn_t gfn_offset, unsigned long mask)
 {
 	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
-	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
-	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+	phys_addr_t start, end;
+	u32 i;
 
-	stage2_wp_range(kvm, start, end);
+	for (i = __ffs(mask); i <= __fls(mask); i++) {
+		if (test_bit_le(i, &mask)) {
+			start = (base_gfn + i) << PAGE_SHIFT;
+			end = (base_gfn + i + 1) << PAGE_SHIFT;
+			stage2_wp_range(kvm, start, end);
+		}
+	}
 }
 
 /*
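
As a side note (a sketch, not part of this patch): an equivalent way to
visit only the set bits, without testing the clear ones, is the
clear-lowest-bit idiom that the x86 implementation of this hook uses:

	while (mask) {
		u32 i = __ffs(mask);	/* index of lowest set bit */

		stage2_wp_range(kvm, (base_gfn + i) << PAGE_SHIFT,
				(base_gfn + i + 1) << PAGE_SHIFT);
		mask &= mask - 1;	/* clear that bit */
	}

Both forms issue one stage2_wp_range() call per set bit; the loop in
this patch simply mirrors the __ffs()/__fls() bounds of the code it
replaces.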
--
2.19.1