Re: [PATCH 2/2] KVM: x86/mmu: Bail out kvm_tdp_map_page() when VM dead

From: Sean Christopherson
Date: Tue Feb 18 2025 - 11:05:21 EST


On Mon, Feb 17, 2025, Yan Zhao wrote:
> Bail out of the loop in kvm_tdp_map_page() when a VM is dead. Otherwise,
> kvm_tdp_map_page() may get stuck looping in the kernel when there's only
> one vCPU in the VM (or if the other vCPUs are not executing ioctls), even
> if fatal errors have occurred.
>
> kvm_tdp_map_page() is called by the ioctl KVM_PRE_FAULT_MEMORY or the TDX
> ioctl KVM_TDX_INIT_MEM_REGION. It loops in the kernel whenever RET_PF_RETRY
> is returned. In the TDP MMU, kvm_tdp_mmu_map() always returns RET_PF_RETRY,
> regardless of the specific error code from tdp_mmu_set_spte_atomic(),
> tdp_mmu_link_sp(), or tdp_mmu_split_huge_page(). While this is acceptable
> in the general case, where the only possible error code from these functions is
> -EBUSY, TDX introduces an additional error code, -EIO, due to SEAMCALL
> errors.
>
> Since this -EIO error is also a fatal error, check for VM dead in
> kvm_tdp_map_page() to avoid retrying uselessly until a signal is pending.
>
> The error -EIO is uncommon and has not been observed in real workloads.
> Currently, it is only hypothetically triggered by bypassing the real
> SEAMCALL and faking an error in the SEAMCALL wrapper.
>
> Signed-off-by: Yan Zhao <yan.y.zhao@xxxxxxxxx>
> ---
> arch/x86/kvm/mmu/mmu.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 08ed5092c15a..3a8d735939b5 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4700,6 +4700,10 @@ int kvm_tdp_map_page(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code, u8 *level
> 	do {
> 		if (signal_pending(current))
> 			return -EINTR;
> +
> +		if (vcpu->kvm->vm_dead)

This needs to be READ_ONCE(). Along those lines, I think I'd prefer

	if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu))
		return -EIO;

so that if more terminal requests come along, we can bundle everything into a
single check via a selective version of kvm_request_pending().
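
For reference, a rough sketch of how that check would slot into
kvm_tdp_map_page()'s retry loop (paraphrased, not a verbatim copy of the
upstream code):

	do {
		if (signal_pending(current))
			return -EINTR;

		/*
		 * Stop retrying once the VM has been marked dead, e.g. after
		 * a fatal SEAMCALL error, instead of spinning until a signal
		 * arrives.
		 */
		if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu))
			return -EIO;

		cond_resched();
		r = kvm_mmu_do_page_fault(vcpu, gpa, error_code, true, NULL, level);
	} while (r == RET_PF_RETRY);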