Re: [RFC PATCH] Add Hyperv extended hypercall support in KVM

From: Vipin Sharma
Date: Mon Oct 24 2022 - 16:12:43 EST


On Mon, Oct 24, 2022 at 8:22 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Mon, Oct 24, 2022, Vitaly Kuznetsov wrote:
> > While some 'extended' hypercalls may indeed need to be handled in KVM,
> > there's no harm done in forwarding all unknown-to-KVM hypercalls to
> > userspace. The only issue I envision is how would userspace discover
> > which extended hypercalls are supported by KVM in case it (userspace) is
> > responsible for handling HvExtCallQueryCapabilities call which returns
> > the list of supported hypercalls. E.g. in case we decide to implement
> > HvExtCallMemoryHeatHint in KVM, how are we going to communicate this to
> > userspace?
> >
> > Normally, VMM discovers the availability of Hyper-V features through
> > KVM_GET_SUPPORTED_HV_CPUID but extended hypercalls are not listed in
> > CPUID. This can always be solved by adding new KVM CAPs, of
> > course. Alternatively, we can add a single
> > "KVM_CAP_HYPERV_EXT_CALL_QUERY" which will just return the list of
> > extended hypercalls supported by KVM (which Vipin's patch adds anyway to
> > *set* the list instead).
>
> AIUI, the TLFS uses a 64-bit mask to enumerate which extended hypercalls are
> supported, so a single CAP should be a perfect fit. And KVM can use the
> capability both to enumerate support _and_ to allow userspace to enable
> in-kernel handling. E.g.
>
> check():
>         case KVM_CAP_HYPERV_EXT_CALL:
>                 return KVM_SUPPORTED_HYPERV_EXT_CALL;
>
>
> enable():
>
>         case KVM_CAP_HYPERV_EXT_CALL:
>                 r = -EINVAL;
>                 if (cap->args[0] & ~KVM_SUPPORTED_HYPERV_EXT_CALL)
>                         break;
>
>                 mutex_lock(&kvm->lock);
>                 if (!kvm->created_vcpus) {

Any reason for allowing the capability to be set only before vCPUs are
created?

Also, in my patch I put the handling in kvm_vcpu_ioctl_enable_cap()
since all of the Hyper-V related code was there, but given that this
capability is a VM setting rather than a per-vCPU setting, would
kvm_vm_ioctl_enable_cap() be the better place?
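
To make the second question concrete, the VM-level placement would look
roughly like the below, where KVM_CAP_HYPERV_EXT_CALL is the placeholder
name from your sketch:

        /* Sketch only: handle the cap in the VM-scoped enable_cap path so
         * the mask lives in the VM-wide struct kvm_hv, not per-vCPU state.
         */
        static int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
                                           struct kvm_enable_cap *cap)
        {
                ...
                case KVM_CAP_HYPERV_EXT_CALL:
                        /* enable() logic from above goes here */
                        ...
        }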

>                         to_kvm_hv(kvm)->ext_call = cap->args[0];
>                         r = 0;
>                 }
>                 mutex_unlock(&kvm->lock);
>
> kvm_hv_hypercall():
>
>
>         case HV_EXT_CALL_QUERY_CAPABILITIES ... HV_EXT_CALL_MAX:
>                 if (unlikely(hc.fast)) {
>                         ret = HV_STATUS_INVALID_PARAMETER;
>                         break;
>                 }
>                 if (!(hc.code & to_kvm_hv(vcpu->kvm)->ext_call))

It won't be exactly this check: hc.code will first have to be mapped to
its corresponding bit in the capabilities mask, and that bit is then
ANDed with ext_call.
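
Roughly like this, assuming bit N in the query-capabilities mask
corresponds to the Nth code after HV_EXT_CALL_QUERY_CAPABILITIES (the
helper is made up, and the exact bit assignment still needs to be
verified against the TLFS):

        /* Hypothetical mapping from an extended hypercall code to its bit
         * in the capabilities mask; check the bit positions vs. the TLFS.
         */
        static inline u64 hv_ext_call_to_cap_bit(u16 code)
        {
                return BIT_ULL(code - HV_EXT_CALL_QUERY_CAPABILITIES);
        }

        ...
                if (!(hv_ext_call_to_cap_bit(hc.code) &
                      to_kvm_hv(vcpu->kvm)->ext_call))
                        goto hypercall_userspace_exit;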


>                         goto hypercall_userspace_exit;
>
>                 ret = kvm_hv_ext_hypercall(...);
>                 break;
>
>
> That maintains backwards compatibility with "exit on everything", as userspace
> still needs to opt in to having KVM handle specific hypercalls in-kernel, and it
> provides the necessary knob for userspace to tell KVM which hypercalls should be
> allowed, i.e. it ensures KVM doesn't violate HV_EXT_CALL_QUERY_CAPABILITIES.
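
Just to confirm my understanding of the userspace side, the flow would
be roughly the below. KVM_CAP_HYPERV_EXT_CALL is your placeholder name,
WANTED_EXT_CALLS is made up, and this assumes the supported mask fits in
KVM_CHECK_EXTENSION's int return value (which the handful of currently
defined extended hypercalls would):

        struct kvm_enable_cap cap = {
                .cap = KVM_CAP_HYPERV_EXT_CALL,
        };
        int supported;

        /* Discover which extended hypercalls KVM can handle in-kernel. */
        supported = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_HYPERV_EXT_CALL);
        if (supported < 0)
                err(1, "KVM_CHECK_EXTENSION");

        /*
         * Opt in to in-kernel handling for a subset; any extended
         * hypercall not enabled here still exits to userspace.
         */
        cap.args[0] = supported & WANTED_EXT_CALLS;
        if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap))
                err(1, "KVM_ENABLE_CAP");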

So, should I send a version with a KVM capability similar to the above,
or, for now, just send the version which by default exits to userspace,
and add the KVM capability later when the need arises?