Re: [PATCH v2 06/11] KVM: SVM: add wrappers to enable/disable IRET interception

From: Santosh Shukla
Date: Thu Dec 08 2022 - 07:10:10 EST

On 12/6/2022 5:44 PM, Maxim Levitsky wrote:
> On Mon, 2022-12-05 at 21:11 +0530, Santosh Shukla wrote:
>> On 11/30/2022 1:07 AM, Maxim Levitsky wrote:
>>> SEV-ES guests don't use IRET interception for the detection of
>>> an end of a NMI.
>>>
>>> Therefore it makes sense to create a wrapper to avoid repeating
>>> the check for the SEV-ES.
>>>
>>> No functional change is intended.
>>>
>>> Suggested-by: Sean Christopherson <seanjc@xxxxxxxxxx>
>>> Signed-off-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
>>> ---
>>> arch/x86/kvm/svm/svm.c | 28 +++++++++++++++++++---------
>>> 1 file changed, 19 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
>>> index 512b2aa21137e2..cfed6ab29c839a 100644
>>> --- a/arch/x86/kvm/svm/svm.c
>>> +++ b/arch/x86/kvm/svm/svm.c
>>> @@ -2468,16 +2468,29 @@ static int task_switch_interception(struct kvm_vcpu *vcpu)
>>> has_error_code, error_code);
>>> }
>>>
>>> +static void svm_disable_iret_interception(struct vcpu_svm *svm)
>>> +{
>>> + if (!sev_es_guest(svm->vcpu.kvm))
>>> + svm_clr_intercept(svm, INTERCEPT_IRET);
>>> +}
>>> +
>>> +static void svm_enable_iret_interception(struct vcpu_svm *svm)
>>> +{
>>> + if (!sev_es_guest(svm->vcpu.kvm))
>>> + svm_set_intercept(svm, INTERCEPT_IRET);
>>> +}
>>> +
>>
>> nits:
>> s/_iret_interception / _iret_intercept
>> does that make sense?
>
> Makes sense. I can also move this to svm.h near svm_set_intercept(); I think
> that would be a better place for this function, if there are no objections.
>
I think the current approach is fine since the function is used only in svm.c, but I
have no strong opinion on moving it to svm.h either way.
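
For reference, if we did go with both the rename and the move, the helpers in
svm.h could look roughly like the sketch below (the _iret_intercept names just
follow the suggestion above; untested, only to illustrate):

static inline void svm_set_iret_intercept(struct vcpu_svm *svm)
{
	/* SEV-ES guests don't use IRET interception to detect the end of an NMI */
	if (!sev_es_guest(svm->vcpu.kvm))
		svm_set_intercept(svm, INTERCEPT_IRET);
}

static inline void svm_clr_iret_intercept(struct vcpu_svm *svm)
{
	if (!sev_es_guest(svm->vcpu.kvm))
		svm_clr_intercept(svm, INTERCEPT_IRET);
}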

Thanks,
Santosh

> Best regards,
> Maxim Levitsky
>>
>> Thanks,
>> Santosh
>>
>>> static int iret_interception(struct kvm_vcpu *vcpu)
>>> {
>>> struct vcpu_svm *svm = to_svm(vcpu);
>>>
>>> ++vcpu->stat.nmi_window_exits;
>>> svm->awaiting_iret_completion = true;
>>> - if (!sev_es_guest(vcpu->kvm)) {
>>> - svm_clr_intercept(svm, INTERCEPT_IRET);
>>> +
>>> + svm_disable_iret_interception(svm);
>>> + if (!sev_es_guest(vcpu->kvm))
>>> svm->nmi_iret_rip = kvm_rip_read(vcpu);
>>> - }
>>> +
>>> kvm_make_request(KVM_REQ_EVENT, vcpu);
>>> return 1;
>>> }
>>> @@ -3470,8 +3483,7 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu)
>>> return;
>>>
>>> svm->nmi_masked = true;
>>> - if (!sev_es_guest(vcpu->kvm))
>>> - svm_set_intercept(svm, INTERCEPT_IRET);
>>> + svm_enable_iret_interception(svm);
>>> ++vcpu->stat.nmi_injections;
>>> }
>>>
>>> @@ -3614,12 +3626,10 @@ static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
>>>
>>> if (masked) {
>>> svm->nmi_masked = true;
>>> - if (!sev_es_guest(vcpu->kvm))
>>> - svm_set_intercept(svm, INTERCEPT_IRET);
>>> + svm_enable_iret_interception(svm);
>>> } else {
>>> svm->nmi_masked = false;
>>> - if (!sev_es_guest(vcpu->kvm))
>>> - svm_clr_intercept(svm, INTERCEPT_IRET);
>>> + svm_disable_iret_interception(svm);
>>> }
>>> }
>>>
>
>