[PATCH] KVM: x86/mmu: Set mmio_value to '0' if reserved #PF can't be generated
From: Sean Christopherson
Date: Wed May 27 2020 - 04:49:13 EST

Set the mmio_value to '0' instead of simply clearing the present bit to
squash a benign warning in kvm_mmu_set_mmio_spte_mask() that complains
about the mmio_value overlapping the lower GFN mask on systems with 52
bits of PA space.
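
For reference (not part of this patch), the warning being squashed is the
lower GFN overlap check in kvm_mmu_set_mmio_spte_mask(), which at the time
reads roughly:

	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);

With 52 bits of PA space, the old code ended up passing
mmio_value == BIT_ULL(51), and bit 51 is a legal GFN bit on such systems,
so the WARN fired even though nothing was functionally broken.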
Opportunistically clean up the code and comments.

Fixes: 608831174100 ("KVM: x86: only do L1TF workaround on affected processors")
Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
---

Thanks for the excuse to clean up kvm_set_mmio_spte_mask(), been wanting a
reason to fix that mess for a few months now :-).
 arch/x86/kvm/mmu/mmu.c | 27 +++++++++------------------
 1 file changed, 9 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2df0f347655a4..aab90f4079ea9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6136,25 +6136,16 @@ static void kvm_set_mmio_spte_mask(void)
 	u64 mask;
 
 	/*
-	 * Set the reserved bits and the present bit of an paging-structure
-	 * entry to generate page fault with PFER.RSV = 1.
+	 * Set a reserved PA bit in MMIO SPTEs to generate page faults with
+	 * PFEC.RSVD=1 on MMIO accesses.  64-bit PTEs (PAE, x86-64, and EPT
+	 * paging) support a maximum of 52 bits of PA, i.e. if the CPU supports
+	 * 52-bit physical addresses then there are no reserved PA bits in the
+	 * PTEs and so the reserved PA approach must be disabled.
 	 */
-
-	/*
-	 * Mask the uppermost physical address bit, which would be reserved as
-	 * long as the supported physical address width is less than 52.
-	 */
-	mask = 1ull << 51;
-
-	/* Set the present bit. */
-	mask |= 1ull;
-
-	/*
-	 * If reserved bit is not supported, clear the present bit to disable
-	 * mmio page fault.
-	 */
-	if (shadow_phys_bits == 52)
-		mask &= ~1ull;
+	if (shadow_phys_bits < 52)
+		mask = BIT_ULL(51) | PT_PRESENT_MASK;
+	else
+		mask = 0;
 
 	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
 }
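
As a quick sanity check of the new logic, here is an illustrative user-space
sketch (not part of the patch; BIT_ULL() and PT_PRESENT_MASK are redefined
locally to match their kernel values):

#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(nr)	(1ULL << (nr))
#define PT_PRESENT_MASK	BIT_ULL(0)

static uint64_t mmio_spte_mask(unsigned int shadow_phys_bits)
{
	/* Bit 51 is a reserved PA bit only on CPUs with < 52 PA bits. */
	if (shadow_phys_bits < 52)
		return BIT_ULL(51) | PT_PRESENT_MASK;

	/* No reserved PA bits exist; MMIO SPTEs must simply be !PRESENT. */
	return 0;
}

int main(void)
{
	/* 46-bit PA: prints 0x8000000000000001, reserved #PF is generated. */
	printf("46-bit PA: mask = %#llx\n",
	       (unsigned long long)mmio_spte_mask(46));

	/* 52-bit PA: prints 0, reserved #PF can't be generated. */
	printf("52-bit PA: mask = %#llx\n",
	       (unsigned long long)mmio_spte_mask(52));
	return 0;
}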