Re: [RFC PATCH] Add Hyperv extended hypercall support in KVM
From: Vitaly Kuznetsov
Date: Mon Oct 24 2022 - 09:45:09 EST
Sean Christopherson <seanjc@xxxxxxxxxx> writes:
> On Fri, Oct 21, 2022, Vipin Sharma wrote:
>> On Fri, Oct 21, 2022 at 1:13 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>> >
>> > On Fri, Oct 21, 2022, Vipin Sharma wrote:
>> > > Hyper-V hypercalls with call codes above 0x8000 are called extended
>> > > hypercalls, per the Hyper-V TLFS. Hypercall 0x8001 is used by guest
>> > > VMs to enquire about the available extended hypercalls.
>> > >
>> > > Add support for HvExtCallQueryCapabilities (0x8001) and
>> > > HvExtCallGetBootZeroedMemory (0x8002) in KVM.
>> > >
>> > > A guest VM discovers the availability of HvExtCallQueryCapabilities
>> > > (0x8001) via CPUID.0x40000003.EBX BIT(20). If the bit is set, the
>> > > guest VM makes hypercall HvExtCallQueryCapabilities (0x8001) to learn
>> > > which extended hypercalls the hypervisor supports.
>> > >
>> > > A userspace VMM can query the capability KVM_CAP_HYPERV_EXT_CALL_QUERY
>> > > to learn which extended hypercalls KVM supports, after which it can
>> > > enable the corresponding capabilities for the guest VM.
>> > >
>> > > HvExtCallQueryCapabilities (0x8001) is handled by KVM in kernel,
>> >
>> > Does this really need to be handled by KVM? I assume this is a rare operation,
>> > e.g. done once during guest boot, so performance shouldn't be a concern. To
>> > avoid breaking existing userspace, KVM can forward HV_EXT_CALL_GET_BOOT_ZEROED_MEMORY
>> > to userspace if and only if HV_ENABLE_EXTENDED_HYPERCALLS is enabled in CPUID,
>> > but otherwise KVM can let userspace deal with the "is this enabled" check.
>>
>> There are 4 more extended hypercalls mentioned in the TLFS, but the
>> document gives no detail about them. From the Linux source code, one of
>> these hypercalls, HvExtCallMemoryHeatHint (0x8003), is a repeated (rep)
>> hypercall. In the file drivers/hv/hv_balloon.c:
>> status = hv_do_rep_hypercall(HV_EXT_CALL_MEMORY_HEAT_HINT,
>> nents, 0, hint, NULL);
>>
>> This makes me a little wary that these hypercalls, or any future
>> hypercalls, may be issued at high frequency by a Windows guest. Also,
>> it is not clear which calls can or cannot be satisfied by userspace
>> alone.
>
> If future support needs to be moved into KVM, e.g. for performance reasons, then
> we can do that if necessary.
>
>> So, I am not sure the default exit to userspace for all of the
>> extended hypercalls will be future-proof; therefore, I went with the
>> approach of only selectively exiting to userspace based on the hypercall.
>
> But punting on everything _might_ be future proof, whereas the only way that
> selectively exiting ends up being future proof is if no one ever wants to support
> another extended hypercall.
While some 'extended' hypercalls may indeed need to be handled in KVM,
there's no harm done in forwarding all unknown-to-KVM hypercalls to
userspace. The only issue I envision is how userspace would discover
which extended hypercalls are supported by KVM in case it (userspace) is
responsible for handling the HvExtCallQueryCapabilities call, which
returns the list of supported hypercalls. E.g. in case we decide to
implement HvExtCallMemoryHeatHint in KVM, how are we going to
communicate this to userspace?
Normally, a VMM discovers the availability of Hyper-V features through
KVM_GET_SUPPORTED_HV_CPUID, but extended hypercalls are not listed in
CPUID. This can always be solved by adding new KVM CAPs, of
course. Alternatively, we can add a single
"KVM_CAP_HYPERV_EXT_CALL_QUERY" which will just return the list of
extended hypercalls supported by KVM (Vipin's patch adds such a
capability anyway, but to *set* the list instead).
TL;DR: handling HvExtCallQueryCapabilities and all unknown-to-KVM (for
now: all) extended hypercalls in userspace sounds like the right approach.
--
Vitaly