Re: [PATCH v2 11/11] KVM: nSVM: implement support for nested VNMI
From: Sean Christopherson
Date: Tue Jan 31 2023 - 19:44:29 EST
On Tue, Nov 29, 2022, Maxim Levitsky wrote:
> This patch allows L1 to use vNMI to accelerate its injection
> of NMIs to L2 by passing through vNMI int_ctl bits from vmcb12
> to/from vmcb02.
>
> While L2 runs, L1's vNMI is inhibited, and L1's NMIs are injected
> normally.
Same feedback on stating the change as a command instead of describing the net
effects.
> In order to support nested VNMI requires saving and restoring the VNMI
> bits during nested entry and exit.
Again, avoid "saving+restoring" terminology. And it's not just a wording nitpick:
it's not a true save/restore, e.g. a pending vNMI for L1 needs to be recognized
and trigger a nested VM-Exit. I.e. KVM can't simply stash the state and restore
it later; KVM needs to actively process the pending NMI.
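Very roughly, and purely as a sketch (the helper choice and exact placement below
are assumed, not a requirement), the nested event handling ends up needing to do
something like:

        /*
         * Sketch only: an NMI pending for L1 while L2 is active, with L1
         * intercepting NMI, has to be turned into a nested VM-Exit rather
         * than parked in a stashed copy of L1's int_ctl.
         */
        if (vcpu->arch.nmi_pending && is_guest_mode(vcpu) &&
            vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_NMI)) {
                svm->vmcb->control.exit_code = SVM_EXIT_NMI;
                nested_svm_vmexit(svm);
        }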
> In case of L1 and L2 both using VNMI- Copy VNMI bits from vmcb12 to
> vmcb02 during entry and vice-versa during exit.
> And in case of L1 uses VNMI and L2 doesn't- Copy VNMI bits from vmcb01 to
> vmcb02 during entry and vice-versa during exit.
>
> Tested with the KVM-unit-test and Nested Guest scenario.
>
>
> Signed-off-by: Santosh Shukla <santosh.shukla@xxxxxxx>
> Signed-off-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
Same SoB issues.
> ---
>  arch/x86/kvm/svm/nested.c | 13 ++++++++++++-
>  arch/x86/kvm/svm/svm.c    |  5 +++++
>  arch/x86/kvm/svm/svm.h    |  6 ++++++
>  3 files changed, 23 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 5bea672bf8b12d..81346665058e26 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -278,6 +278,11 @@ static bool __nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
>         if (CC(!nested_svm_check_tlb_ctl(vcpu, control->tlb_ctl)))
>                 return false;
>
> +        if (CC((control->int_ctl & V_NMI_ENABLE) &&
> +           !vmcb12_is_intercept(control, INTERCEPT_NMI))) {
Align indentation.
        if (CC((control->int_ctl & V_NMI_ENABLE) &&
               !vmcb12_is_intercept(control, INTERCEPT_NMI))) {
                return false;
        }
> +                return false;
> +        }
> +
>         return true;
> }
>
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 0b7e1790fadde1..8fb2085188c5ac 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -271,6 +271,7 @@ struct vcpu_svm {
>         bool pause_filter_enabled : 1;
>         bool pause_threshold_enabled : 1;
>         bool vgif_enabled : 1;
> +        bool vnmi_enabled : 1;
>
>         u32 ldr_reg;
>         u32 dfr_reg;
> @@ -545,6 +546,11 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm)
>         return svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE;
> }
>
> +static inline bool nested_vnmi_enabled(struct vcpu_svm *svm)
> +{
> +        return svm->vnmi_enabled && (svm->nested.ctl.int_ctl & V_NMI_ENABLE);
Gah, the "nested" flags in vcpu_svm are super confusing. I initially read this
as "if vNMI is enabled in L1 and vmcb12".
I have a series, originally prepped for the architectural LBRs work, that will
allow turning this into:

        return guest_can_use(vcpu, X86_FEATURE_VNMI) &&
               (svm->nested.ctl.int_ctl & V_NMI_ENABLE);
I'll get that series posted.
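For completeness, the "set" side would be something like the below, though the
helper name (kvm_governed_feature_set() is made up here) and the module param
check are placeholders until that series is actually posted:

        /*
         * Illustrative only: record "can L1 use vNMI" once, at CPUID update
         * time, instead of carrying a dedicated vnmi_enabled flag in
         * vcpu_svm.  Helper name is a placeholder.
         */
        if (vnmi && guest_cpuid_has(vcpu, X86_FEATURE_VNMI))
                kvm_governed_feature_set(vcpu, X86_FEATURE_VNMI);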
Nothing to do on your end, just an FYI. I'll sort out conflicts if/when they happen.