Re: [PATCH v4 0/8] KVM: x86: hyperv: PV TLB flush for Windows guests
From: Wanpeng Li
Date: Fri May 18 2018 - 07:00:04 EST
2018-05-18 19:19 GMT+08:00 Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>:
> Vitaly Kuznetsov <vkuznets@xxxxxxxxxx> writes:
>
>> Wanpeng Li <kernellwp@xxxxxxxxx> writes:
>>
>>> Hi Vitaly,
>>> 2018-05-16 23:21 GMT+08:00 Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>:
>>>> Changes since v3 [Radim Krcmar]:
>>>> - PATCH2 fixing 'HV_GENERIC_SET_SPARCE_4K' typo added.
>>>> - PATCH5 introducing kvm_make_vcpus_request_mask() API added.
>>>> - Fix undefined behavior for hv->vp_index >= 64.
>>>> - Merge kvm_hv_flush_tlb() and kvm_hv_flush_tlb_ex()
>>>> - For -ex case preload all banks with a single kvm_read_guest().
>>>>
>>>> Description:
>>>>
>>>> This is both a new feature and a bugfix.
>>>>
>>>> Bugfix description:
>>>>
>>>> It was found that Windows 2016 guests on KVM crash when they have > 64
>>>> vCPUs, non-flat topology (>1 core/thread per socket; in case it has >64
>>>> sockets Windows just ignores vCPUs above 64) and Hyper-V enlightenments
>>>
>>> We tried the command line below; the Windows 2016 guest logs in
>>> successfully and all 80 vCPUs are visible in the guest w/o the
>>> patchset. Why do you mention a crash and ignored vCPUs?
>>>
>>> /usr/local/bin/qemu-system-x86_64 -machine pc-i440fx-rhel7.3.0 -m
>>> 8192 -smp 80,sockets=2,cores=40,threads=1 -device
>>> ide-drive,bus=ide.0,drive=test -drive
>>> id=test,if=none,file=/instanceimage/359b18ab-05bb-460d-9b53-89505bca68ed/359b18ab-05bb-460d-9b53-89505bca68ed_vda_1.qcow2
>>> -net nic,model=virtio -net user -monitor stdio -usb -usbdevice tablet
>>> --enable-kvm --cpu host -vnc 0.0.0.0:2
>>
>> Crash happens when you manifest yourself as Hyper-V, you can do this by
>> adding any 'hv-*' feature (e.g. try '-cpu host,hv_vpindex').
>
> Oh, and the 'ignore' happens when you pass more than 64 sockets
> (something like "-smp 128,sockets=128,cores=1,threads=1") -- and this
> happens regardless of Hyper-V enlightenments. But I guess it's just
> because Windows doesn't support more than 64 sockets.
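[For reference, the two scenarios described above can be reproduced with
QEMU command lines along these lines. The image name and drive options are
placeholders, not the reporter's actual setup; adapt them as needed.]

```shell
# Crash case: manifest as Hyper-V by adding any hv-* feature (here
# hv_vpindex) to a >64-vCPU Windows 2016 guest with non-flat topology.
qemu-system-x86_64 --enable-kvm -m 8192 \
    -smp 80,sockets=2,cores=40,threads=1 \
    -cpu host,hv_vpindex \
    -drive file=win2016.qcow2,if=virtio   # placeholder image

# 'Ignore' case: more than 64 sockets; Windows brings up only the first
# 64 vCPUs, with or without Hyper-V enlightenments.
qemu-system-x86_64 --enable-kvm -m 8192 \
    -smp 128,sockets=128,cores=1,threads=1 \
    -cpu host \
    -drive file=win2016.qcow2,if=virtio   # placeholder image
```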
Is there an option in the guest to avoid checking for PV TLB flush support in Hyper-V?
Regards,
Wanpeng Li