Re: [PATCH V2 2/4] KVM: SVM: Fix nested NPF injection to set PFERR_GUEST_{PAGE,FINAL}_MASK

From: Kevin Cheng

Date: Fri Mar 13 2026 - 01:37:14 EST


On Fri, Mar 13, 2026 at 12:50 AM Kevin Cheng <chengkev@xxxxxxxxxx> wrote:
>
> On Tue, Feb 24, 2026 at 11:42 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> >
> > On Tue, Feb 24, 2026, Kevin Cheng wrote:
> > > When KVM emulates an instruction for L2 and encounters a nested page
> > > fault (e.g., during string I/O emulation), nested_svm_inject_npf_exit()
> > > injects an NPF to L1. However, the code incorrectly hardcodes
> > > (1ULL << 32) for exit_info_1's upper bits when the original exit was
> > > not an NPF. This always sets PFERR_GUEST_FINAL_MASK even when the fault
> > > occurred on a page table page, preventing L1 from correctly identifying
> > > the cause of the fault.
> > >
> > > Set PFERR_GUEST_PAGE_MASK in the error code when a nested page fault
> > > occurs during a guest page table walk, and PFERR_GUEST_FINAL_MASK when
> > > the fault occurs on the final GPA-to-HPA translation.
> > >
> > > Widen error_code in struct x86_exception from u16 to u64 to accommodate
> > > the PFERR_GUEST_* bits (bits 32 and 33).
> >
> > Stale comment as this was moved to a separate patch.
> >
> > > Update nested_svm_inject_npf_exit() to use fault->error_code directly
> > > instead of hardcoding the upper bits. Also add a WARN_ON_ONCE if neither
> > > PFERR_GUEST_FINAL_MASK nor PFERR_GUEST_PAGE_MASK is set, as this would
> > > indicate a bug in the page fault handling code.
> > >
> > > Signed-off-by: Kevin Cheng <chengkev@xxxxxxxxxx>
> > > ---
> > > arch/x86/include/asm/kvm_host.h | 2 ++
> > > arch/x86/kvm/mmu/paging_tmpl.h | 22 ++++++++++------------
> > > arch/x86/kvm/svm/nested.c | 19 +++++++++++++------
> > > 3 files changed, 25 insertions(+), 18 deletions(-)
> > >
> > > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > > index ff07c45e3c731..454f84660edfc 100644
> > > --- a/arch/x86/include/asm/kvm_host.h
> > > +++ b/arch/x86/include/asm/kvm_host.h
> > > @@ -280,6 +280,8 @@ enum x86_intercept_stage;
> > > #define PFERR_GUEST_RMP_MASK BIT_ULL(31)
> > > #define PFERR_GUEST_FINAL_MASK BIT_ULL(32)
> > > #define PFERR_GUEST_PAGE_MASK BIT_ULL(33)
> > > +#define PFERR_GUEST_FAULT_STAGE_MASK \
> > > + (PFERR_GUEST_FINAL_MASK | PFERR_GUEST_PAGE_MASK)
> > > #define PFERR_GUEST_ENC_MASK BIT_ULL(34)
> > > #define PFERR_GUEST_SIZEM_MASK BIT_ULL(35)
> > > #define PFERR_GUEST_VMPL_MASK BIT_ULL(36)
> > > diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> > > index 37eba7dafd14f..f148c92b606ba 100644
> > > --- a/arch/x86/kvm/mmu/paging_tmpl.h
> > > +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> > > @@ -385,18 +385,12 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
> > > real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(table_gfn),
> > > nested_access, &walker->fault);
> > >
> > > - /*
> > > - * FIXME: This can happen if emulation (for of an INS/OUTS
> > > - * instruction) triggers a nested page fault. The exit
> > > - * qualification / exit info field will incorrectly have
> > > - * "guest page access" as the nested page fault's cause,
> > > - * instead of "guest page structure access". To fix this,
> > > - * the x86_exception struct should be augmented with enough
> > > - * information to fix the exit_qualification or exit_info_1
> > > - * fields.
> > > - */
> > > - if (unlikely(real_gpa == INVALID_GPA))
> > > + if (unlikely(real_gpa == INVALID_GPA)) {
> > > +#if PTTYPE != PTTYPE_EPT
> >
> > I would rather swap the order of patches two and three, so that we end up with
> > a "positive" if-statement. I.e. add EPT first so that we get (spoiler alert):
> >
> > #if PTTYPE == PTTYPE_EPT
> > walker->fault.exit_qualification |= EPT_VIOLATION_GVA_IS_VALID;
> > #else
> > walker->fault.error_code |= PFERR_GUEST_PAGE_MASK;
> > #endif
> >
> > > diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> > > index de90b104a0dd5..1013e814168b5 100644
> > > --- a/arch/x86/kvm/svm/nested.c
> > > +++ b/arch/x86/kvm/svm/nested.c
> > > @@ -40,18 +40,25 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu,
> > > struct vmcb *vmcb = svm->vmcb;
> > >
> > > if (vmcb->control.exit_code != SVM_EXIT_NPF) {
> > > - /*
> > > - * TODO: track the cause of the nested page fault, and
> > > - * correctly fill in the high bits of exit_info_1.
> > > - */
> > > - vmcb->control.exit_code = SVM_EXIT_NPF;
> > > - vmcb->control.exit_info_1 = (1ULL << 32);
> > > + vmcb->control.exit_info_1 = fault->error_code;
> > > vmcb->control.exit_info_2 = fault->address;
> > > }
> > >
> > > + vmcb->control.exit_code = SVM_EXIT_NPF;
> > > vmcb->control.exit_info_1 &= ~0xffffffffULL;
> > > vmcb->control.exit_info_1 |= fault->error_code;
> > >
> > > + /*
> > > + * All nested page faults should be annotated as occurring on the
> > > + * final translation *or* the page walk. Arbitrarily choose "final"
> > > + * if KVM is buggy and enumerated both or neither.
> > > + */
> > > + if (WARN_ON_ONCE(hweight64(vmcb->control.exit_info_1 &
> > > + PFERR_GUEST_FAULT_STAGE_MASK) != 1)) {
> > > + vmcb->control.exit_info_1 &= ~PFERR_GUEST_FAULT_STAGE_MASK;
> > > + vmcb->control.exit_info_1 |= PFERR_GUEST_FINAL_MASK;
> > > + }
> >
> > This is all kinds of messy. KVM _appears_ to still rely on the hardware-reported
> > address + error_code
> >
> > if (vmcb->control.exit_code != SVM_EXIT_NPF) {
> > vmcb->control.exit_info_1 = fault->error_code;
> > vmcb->control.exit_info_2 = fault->address;
> > }
> >
> > But then drops bits 31:0 in favor of the fault error code. Then even more
> > bizarrely, bitwise-ORs bits 63:32 and WARNs if multiple bits in
> > PFERR_GUEST_FAULT_STAGE_MASK are set. In practice, the bitwise-OR of 63:32 is
> > _only_ going to affect PFERR_GUEST_FAULT_STAGE_MASK, because the other defined
> > bits are all specific to SNP, and KVM doesn't support nested virtualization for
> > SEV+.
> >
> > So I don't understand why this isn't simply:
> >
> > vmcb->control.exit_code = SVM_EXIT_NPF;
> > vmcb->control.exit_info_1 = fault->error_code;
>
> The issue with this is that the PFERR_GUEST_FAULT_STAGE_MASK bits are
> not set in the walker fault error code for hardware-reported NPFs
> (non-emulator faults).
>
> The active mmu after an NPF exit from L2 is the guest_mmu. In
> walk_addr_generic, we only set the PFERR_GUEST_FAULT_STAGE_MASK bits
> in the fault error code if kvm_translate_gpa returns INVALID_GPA (i.e.
> when mmu == &vcpu->arch.nested_mmu and the translation fails).
> Otherwise, kvm_translate_gpa just returns the gpa and we don't set the
> PFERR_GUEST_FAULT_STAGE_MASK bits in the fault error code.
>
> Even when we NPF exit from L2, the hardware-reported exit_info_1[31:0]
> isn't accurate, which is why we drop bits 31:0 in favor of the fault
> error code. I compared the hardware error code against the fault error
> code while running the KUTs (specifically the tests that expect faults) in
> https://lore.kernel.org/all/20260312200308.3089379-9-chengkev@xxxxxxxxxx/
> and they were not as expected. I believe I saw different error codes
> because the entry existed in L1's NPT, but not yet in L0's NPT? The
> hardware-reported error code usually did not reflect the L1 NPT page
> permission restrictions, only the type of access/op that caused the
> fault. Additionally, the present bit was not set.
>
> If L2 has never accessed a GPA present in the L1 NPT with restricted
> permissions, we could see differing error codes between hardware and
> the fault error codes. I think that is why the code before my change
> dropped bits 31:0 in favor of the walker fault error code. I'm not
> entirely sure, though, so I could be wrong.
>
> Since the guest_mmu walker can't know whether the GPA it's translating
> is a page table page or a final page, I think we are stuck with
> combining bits 63:32 from the hardware-reported error code with bits
> 31:0 of the fault error code for the non-emulator NPF injection case.
>

Oops, this was already pointed out in
https://lore.kernel.org/all/aZ4J0flo0SwjAWgW@xxxxxxxxxx/ lol