Re: [PATCH] KVM: x86/tdp_mmu: Fix base gfn check when zapping private huge SPTE
From: Xiaoyao Li
Date: Mon Mar 09 2026 - 21:30:29 EST
On 3/9/2026 10:23 PM, Sean Christopherson wrote:
On Mon, Mar 09, 2026, pcjer wrote:
Signed-off-by: pcjer <pcj3195161583@xxxxxxx>
---
arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 1266d5452..8482a85d6 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1025,8 +1025,8 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
slot = gfn_to_memslot(kvm, gfn);
if (kvm_hugepage_test_mixed(slot, gfn, iter.level) ||
- (gfn & mask) < start ||
- end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
+ (gfn & ~mask) < start ||
+ end < (gfn & ~mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
Somewhat to my surprise, this does indeed look like a legitimate fix, ignoring
that the code in question was never merged and was last posted 2+ years ago[*]
(and has long since been replaced).
The bug likely went unnoticed during development because "(gfn & mask) < start"
would almost always be true (mask == 511 for a 2MiB page). Though mask should
really just be inverted from the get-go in this code:
+ if (is_private && kvm_gfn_shared_mask(kvm) &&
+ is_large_pte(iter.old_spte)) {
+ gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
+ gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
+
+ struct kvm_memory_slot *slot;
+ struct kvm_mmu_page *sp;
+
[*] https://lore.kernel.org/all/c656573ccc68e212416d323d35f884bff25e6e2d.1708933624.git.isaku.yamahata@xxxxxxxxx
/facepalm, the buggy code was written by me.