+ * without VMM intervention, so return a corresponding internal error
+ * instead (otherwise, vCPU will fall into infinite loop trying to
+ * deliver the event again and again).
+ */
+ if (error_code & PFERR_EVT_DELIVERY) {
Hmm, I'm 99% certain handling the error in this location is wrong, and I'm also
pretty sure it's unnecessary. Or rather, the synthetic error code is unnecessary.
It's wrong because this path specifically handles "cached" MMIO, i.e. emulated
MMIO that is triggered by a special MMIO SPTE, whereas KVM should punt to
userspace on _any_ MMIO emulation that happens while vectoring an event. KVM
has gotten away with the flaw because SVM is completely broken, and VMX can
always generate reserved EPTEs. But with SVM, on CPUs with MAXPHYADDR=52, KVM
can't generate a reserved #PF, i.e. can't do cached MMIO, and so I'm pretty
sure your test would fail on those CPUs since they'll never come down this path.
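For reference, a simplified sketch of the mechanism (modeled on is_mmio_spte()
in arch/x86/kvm/mmu/spte.h; exact names vary by kernel version):

/*
 * "Cached" MMIO installs a special MMIO SPTE whose value deliberately sets
 * bits the CPU treats as reserved, so that a guest access faults with
 * PFERR_RSVD_MASK (an EPT misconfig on VMX) and KVM can emulate the access
 * without walking memslots.  With MAXPHYADDR=52 on SVM there are no
 * reserved PTE bits left to abuse, so MMIO caching is disabled entirely.
 */
static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
{
	return (spte & shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
	       likely(enable_mmio_caching);
}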
Heh, though I bet the introduction of RET_PF_WRITE_PROTECTED has regressed shadow
paging on CPUs with PA52.
Anyways, the synthetic PFERR flag is unnecessary because the information is readily
available to {vmx,svm}_check_emulate_instruction(). Ha! And EMULTYPE_WRITE_PF_TO_SP
means vendor code can even precisely identify MMIO.
I think another X86EMUL_* return type is needed, but that's better than a synthetic
#PF error code flag.
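E.g. a rough sketch for the VMX side (completely untested, omits the existing
SGX handling, and uses the strawman X86EMUL_UNHANDLEABLE_VECTORING from below):

static int vmx_check_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
					 void *insn, int insn_len)
{
	/*
	 * Per the above, emulated MMIO is precisely EMULTYPE_PF without
	 * EMULTYPE_WRITE_PF_TO_SP, and idt_vectoring_info captures whether
	 * the access occurred while vectoring an event.  Punt to userspace
	 * instead of retrying, as re-entering the guest would re-trigger
	 * event delivery forever.
	 */
	if ((to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK) &&
	    (emul_type & EMULTYPE_PF) && !(emul_type & EMULTYPE_WRITE_PF_TO_SP))
		return X86EMUL_UNHANDLEABLE_VECTORING;

	return X86EMUL_CONTINUE;
}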
- /*
- * Note:
- * Do not try to fix EXIT_REASON_EPT_MISCONFIG if it caused by
- * delivery event since it indicates guest is accessing MMIO.
- * The vm-exit can be triggered again after return to guest that
- * will cause infinite loop.
- */
if ((vectoring_info & VECTORING_INFO_VALID_MASK) &&
(exit_reason.basic != EXIT_REASON_EXCEPTION_NMI &&
exit_reason.basic != EXIT_REASON_EPT_VIOLATION &&
exit_reason.basic != EXIT_REASON_PML_FULL &&
exit_reason.basic != EXIT_REASON_APIC_ACCESS &&
exit_reason.basic != EXIT_REASON_TASK_SWITCH &&
- exit_reason.basic != EXIT_REASON_NOTIFY)) {
+ exit_reason.basic != EXIT_REASON_NOTIFY &&
+ exit_reason.basic != EXIT_REASON_EPT_MISCONFIG)) {
Changing the behavior of EPT_MISCONFIG belongs in a separate commit.
Huh, and that's technically a bug fix. If userspace _creates_ a memslot, KVM
doesn't eagerly zap MMIO SPTEs and instead relies on vcpu_match_mmio_gen() to
force kvm_mmu_page_fault() down the actual page fault path. If the guest somehow
manages to generate an access to the new page while vectoring an event, KVM will
spuriously exit to userspace instead of trying to fault-in the new page.
It's _ridiculously_ contrived, but technically a bug.
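(For reference, vcpu_match_mmio_gen() is roughly the below; creating a memslot
bumps the memslots generation, so stale cached MMIO info misses the check and
the access falls through to the real page fault path:)

static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu)
{
	/* Cached MMIO info is valid only for the generation it was set in. */
	return vcpu->arch.mmio_gen == kvm_memslots(vcpu->kvm)->generation;
}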
Ugh, and the manual call to vmx_check_emulate_instruction() in handle_ept_misconfig()
is similarly flawed, though encountering that is even more contrived as that only
affects accesses from SGX enclaves.
Hmm, and looking at all of this, SVM doesn't take advantage of KVM_FAST_MMIO_BUS.
Unless I'm forgetting some fundamental incompatibility, SVM can do fast MMIO so
long as next_rip is valid.
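E.g. a completely untested, hypothetical helper (svm_fast_mmio() is a made-up
name) mirroring what VMX does in handle_ept_misconfig():

static int svm_fast_mmio(struct kvm_vcpu *vcpu, gpa_t gpa)
{
	struct vcpu_svm *svm = to_svm(vcpu);

	/* Without a valid next_rip, KVM can't skip the MMIO instruction. */
	if (!nrips || !svm->vmcb->control.next_rip)
		return -EOPNOTSUPP;

	/*
	 * Fast MMIO handles only zero-length writes, i.e. ioeventfds, which
	 * don't require decoding the instruction to emulate the access.
	 */
	if (kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL))
		return -EOPNOTSUPP;

	return kvm_skip_emulated_instruction(vcpu);
}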
Anyways, no need to deal with vmx_check_emulate_instruction() or fast MMIO; I'll
tackle those in a separate series. But for this series, please do the EPT misconfig
change in a separate patch from the SVM fix. E.g. extract the helper, convert VMX
to the new flow, and then teach SVM to do the same.
gpa_t gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
- bool is_mmio = exit_reason.basic == EXIT_REASON_EPT_MISCONFIG;
-
Please keep a blank line after the variable declarations.
- kvm_prepare_ev_delivery_failure_exit(vcpu, gpa, is_mmio);
+ kvm_prepare_ev_delivery_failure_exit(vcpu, gpa, false);
return 0;
}
All in all, I think this is the basic gist? Definitely feel free to propose a
better name than X86EMUL_UNHANDLEABLE_VECTORING.
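Sketch of the common code side (untested; assumes the existing
kvm_check_emulate_insn() wrapper in x86.c, reuses
kvm_prepare_ev_delivery_failure_exit() from your patch, and the new
X86EMUL_* value is arbitrary):

/* In kvm_emulate.h, alongside the other X86EMUL_* return codes: */
#define X86EMUL_UNHANDLEABLE_VECTORING	7

/* In x86_emulate_instruction(), where the vendor callback is consulted: */
	r = kvm_check_emulate_insn(vcpu, emulation_type, insn, insn_len);
	if (r != X86EMUL_CONTINUE) {
		if (r == X86EMUL_RETRY_INSTR || r == X86EMUL_PROPAGATE_FAULT)
			return 1;

		/* MMIO hit while vectoring an event: punt to userspace. */
		if (r == X86EMUL_UNHANDLEABLE_VECTORING) {
			kvm_prepare_ev_delivery_failure_exit(vcpu, cr2_or_gpa,
							     true);
			return 0;
		}

		WARN_ON_ONCE(r != X86EMUL_UNHANDLEABLE);
		return handle_emulation_failure(vcpu, emulation_type);
	}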