Re: [PATCH] KVM: x86/mmu: Don't create SPTEs for addresses that aren't mappable

From: Edgecombe, Rick P

Date: Fri Feb 20 2026 - 19:02:06 EST


On Fri, 2026-02-20 at 16:54 +0000, Sean Christopherson wrote:
> > Did you consider the potential mismatch between the GFN passed to
> > kvm_flush_remote_tlbs_range() and the PTE's for different GFNs that
> > actually got touched. For example in recover_huge_pages_range(), if
> > it flushed the wrong range then the page table that got freed could
> > still be in the intermediate translation caches?
>
> I hadn't thought about this before you mentioned it, but I audited all code
> paths and all paths that lead to kvm_flush_remote_tlbs_range() use a
> "sanitized" gfn, i.e. KVM never emits a flush for the gfn reported by the
> fault.

Doh, sorry.

>   Which meshes with a logical analysis as well: KVM only needs to flush when
> removing/changing an entry, and so should always derive the to-be-flushed
> ranges using the gfn that was used to make the change.
>
> And the "bad" gfn can never have TLB entries, because KVM never creates
> mappings.

Oh. I was under the impression that the fault gets its GPA bits stripped and
the page ends up mapped at a different (wrong) GPA. So if some optimized
GFN-targeted flush was pointed at the unstripped GPA, then it could miss the
GPA that actually got mapped and made it into the TLB. Anyway, it seems moot.

> FWIW, even if KVM screwed up something like recover_huge_pages_range(), it
> wouldn't hurt the _host_.  Because from a host safety perspective, KVM x86
> only needs to get it right in three paths: kvm_flush_shadow_all(),
> __kvm_gmem_invalidate_begin(), and kvm_mmu_notifier_invalidate_range_start().