> Hi Sean,
>
>> Hmm, what is the test doing?
>
> Booting through OVMF and a kernel, with no rootfs provided and
> panic=-1 specified on the kernel command line. It's a pure startup
> time test.
>
> Combining the two sub-threads: both of the suggestions:
>
> a) adding a hyperv_flush_guest_mapping(__pa(root->spt)) after
>    kvm_tdp_mmu_get_vcpu_root_hpa's call to tdp_mmu_alloc_sp()
> b) adding a hyperv_flush_guest_mapping(vcpu->arch.mmu->root.hpa) to
>    svm_flush_tlb_current()
>
> appear to work in my test case (L2 VM startup until the panic due to
> the missing rootfs). But in both of these cases (and also when I
> completely disable HV_X64_NESTED_ENLIGHTENED_TLB), the runtime of an
> iteration of the test is noticeably longer compared to tdp_mmu=0.
>
> Have you been able to reproduce this by any chance?
>
> I would be glad to see either of the two fixes merged (b, or a if it
> doesn't require special L3 nested handling) in order to get this
> regression resolved.

Yes, it's either this or disabling the feature. Depending on the
performance results of adding the hypercall to svm_flush_tlb_current,
the fix could indeed be to just disable usage of
HV_X64_NESTED_ENLIGHTENED_TLB.

Minus making nested SVM (L3) mutually exclusive, I believe this will do
the trick:

+	/* blah blah blah */
+	hv_flush_tlb_current(vcpu);
+

Paolo