On Thu, Mar 07, 2024 at 11:49:11AM +1300, Huang, Kai wrote:
> On 7/03/2024 11:43 am, Sean Christopherson wrote:
> > On Thu, Mar 07, 2024, Kai Huang wrote:
> > > On 28/02/2024 3:41 pm, Sean Christopherson wrote:
> > > > Explicitly detect and disallow private accesses to emulated MMIO in
> > > > kvm_handle_noslot_fault() instead of relying on kvm_faultin_pfn_private()
> > > > to perform the check. This will allow the page fault path to go straight
> > > > to kvm_handle_noslot_fault() without bouncing through __kvm_faultin_pfn().
> > > >
> > > > Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> > > > ---
> > > >  arch/x86/kvm/mmu/mmu.c | 5 +++++
> > > >  1 file changed, 5 insertions(+)
> > > >
> > > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > > index 5c8caab64ba2..ebdb3fcce3dc 100644
> > > > --- a/arch/x86/kvm/mmu/mmu.c
> > > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > > @@ -3314,6 +3314,11 @@ static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
> > > >  {
> > > >  	gva_t gva = fault->is_tdp ? 0 : fault->addr;
> > > >
> > > > +	if (fault->is_private) {
> > > > +		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> > > > +		return -EFAULT;
> > > > +	}
> > > > +
> > > As mentioned in another reply in this series, unless I am mistaken, for a
> > > TDX guest the _first_ MMIO access would still cause an EPT violation with
> > > the MMIO GFN being private.
> > >
> > > Returning to userspace cannot really help here because the MMIO mapping
> > > is inside the guest.
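
(For reference, since "returning to userspace" is doing the work in the hunk
above: the -EFAULT is paired with kvm_mmu_prepare_memory_fault_exit(), so
userspace sees a KVM_EXIT_MEMORY_FAULT describing the access.  A minimal
sketch of what that helper ends up filling in, assuming the
kvm_prepare_memory_fault_exit() plumbing from the guest_memfd series; exact
details may differ by tree:

	/* Report the faulting GPA and that the access was private. */
	vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
	vcpu->run->memory_fault.gpa = fault->gfn << PAGE_SHIFT;
	vcpu->run->memory_fault.size = PAGE_SIZE;
	vcpu->run->memory_fault.flags = 0;
	if (fault->is_private)
		vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;

For emulated MMIO there is no memslot whose attributes userspace could
convert, which is why such an exit is effectively fatal for the guest.)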
> > That's a guest bug.  The guest *knows* it's a TDX VM, it *has* to know.
> > Accessing emulated MMIO and thus taking a #VE before enabling paging is
> > nonsensical.  Either enable paging and set up MMIO regions as shared, or
> > go straight to TDCALL.
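
("Go straight to TDCALL" meaning: instead of touching the MMIO mapping and
relying on a #VE the guest may not yet be able to handle, the guest issues
the MMIO request as a hypercall itself.  A rough sketch modeled on the
guest's mmio_read() in arch/x86/coco/tdx/tdx.c; the helper names and
signatures used here, __tdx_hypercall_ret(), hcall_func() and EPT_READ,
follow one kernel version and have churned across releases:

static int direct_mmio_read(int size, unsigned long addr, unsigned long *val)
{
	/* TDG.VP.VMCALL<#VE.RequestMMIO>: ask the VMM to emulate the read. */
	struct tdx_hypercall_args args = {
		.r10 = TDX_HYPERCALL_STANDARD,
		.r11 = hcall_func(EXIT_REASON_EPT_VIOLATION),
		.r12 = size,
		.r13 = EPT_READ,
		.r14 = addr,
	};

	if (__tdx_hypercall_ret(&args))
		return -EFAULT;

	*val = args.r11;
	return 0;
}

No EPT violation with a private GFN is ever generated on this path, because
the guest never dereferences the MMIO address.)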
> +Kirill,
>
> I kinda forgot the details, but what I am afraid of is that there might be
> a bunch of existing TDX guests (since the TDX guest code is upstreamed)
> using unmodified drivers, which don't map MMIO regions as shared, I suppose.
Unmodified drivers get their MMIO regions mapped with ioremap(), which sets
the shared bit, unless explicitly asked to make the mapping private
(encrypted).
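
The relevant pgprot selection, roughly as it reads in __ioremap_caller() in
arch/x86/mm/ioremap.c (a sketch; the surrounding checks are elided and minor
details vary by version):

	prot = PAGE_KERNEL_IO;
	if ((io_desc.flags & IORES_MAP_ENCRYPTED) || encrypted)
		prot = pgprot_encrypted(prot);
	else
		prot = pgprot_decrypted(prot);

So unless a driver uses ioremap_encrypted() or the resource is explicitly
marked encrypted, the mapping is decrypted, i.e. the TDX shared bit is set
in the PTEs.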