Re: [PATCH] KVM: x86/mmu: Don't create SPTEs for addresses that aren't mappable

From: Sean Christopherson

Date: Fri Feb 20 2026 - 11:55:00 EST


+lists, because I'm confident there's no host attack.

On Thu, Feb 19, 2026, Edgecombe, Rick P wrote:
> On Wed, 2026-02-18 at 16:22 -0800, Sean Christopherson wrote:
> > In practice, the flaw is benign (other than the new WARN) as it only
> > affects guests that ignore guest.MAXPHYADDR (e.g. on CPUs with 52-bit
> > physical addresses but only 4-level paging) or guests being run by a
> > misbehaving userspace VMM (e.g. a VMM that ignored allow_smaller_maxphyaddr
> > or is pre-faulting bad addresses).
>
> I tried to look at whether this is true from a hurt-the-host perspective.
>
> Did you consider the potential mismatch between the GFN passed to
> kvm_flush_remote_tlbs_range() and the PTEs for different GFNs that actually got
> touched? For example, in recover_huge_pages_range(), if it flushed the wrong
> range then the page table that got freed could still be in the intermediate
> translation caches?

I hadn't thought about this before you mentioned it, but I audited the code, and
every path that leads to kvm_flush_remote_tlbs_range() uses a "sanitized" gfn,
i.e. KVM never emits a flush for the gfn reported by the fault. That meshes with
a logical analysis as well: KVM only needs to flush when removing or changing an
entry, and so should always derive the to-be-flushed range from the gfn that was
actually used to make the change.

And the "bad" gfn can never have TLB entries, because KVM never creates mappings.

FWIW, even if KVM screwed up something like recover_huge_pages_range(), it wouldn't
hurt the _host_, because from a host safety perspective KVM x86 only needs to get
it right in three paths: kvm_flush_shadow_all(), __kvm_gmem_invalidate_begin(), and
kvm_mmu_notifier_invalidate_range_start().

> I'm not sure how this HV flush stuff actually works in practice, especially on
> those details. So not raising any red flags. Just thought maybe worth
> considering.