Re: [PATCH] KVM: SVM: refactor avic VM ID allocation
From: Paolo Bonzini
Date: Fri Aug 18 2017 - 12:05:44 EST
On 18/08/2017 17:22, Denys Vlasenko wrote:
> On 08/17/2017 04:33 PM, Paolo Bonzini wrote:
>> On 15/08/2017 22:12, Radim Krčmář wrote:
>>> 2017-08-11 22:11+0200, Denys Vlasenko:
>>>> With lightly tweaked defconfig:
>>>>
>>>> text data bss dec hex filename
>>>> 11259661 5109408 2981888 19350957 12745ad vmlinux.before
>>>> 11259661 5109408 884736 17253805 10745ad vmlinux.after
>>>>
>>>> Only compile-tested.
>>>>
>>>> Signed-off-by: Denys Vlasenko <dvlasenk@xxxxxxxxxx>
>>>> Cc: Joerg Roedel <joro@xxxxxxxxxx>
>>>> Cc: pbonzini@xxxxxxxxxx
>>>> Cc: rkrcmar@xxxxxxxxxx
>>>> Cc: tglx@xxxxxxxxxxxxx
>>>> Cc: mingo@xxxxxxxxxx
>>>> Cc: hpa@xxxxxxxxx
>>>> Cc: x86@xxxxxxxxxx
>>>> Cc: kvm@xxxxxxxxxxxxxxx
>>>> Cc: linux-kernel@xxxxxxxxxxxxxxx
>>>> ---
>>>
>>> Ah, I suspected my todo wasn't this short; thanks for the patch!
>>>
>>>> @@ -1468,6 +1433,22 @@ static int avic_vm_init(struct kvm *kvm)
>>>> clear_page(page_address(l_page));
>>>>
>>>> spin_lock_irqsave(&svm_vm_data_hash_lock, flags);
>>>> + again:
>>>> + vm_id = next_vm_id = (next_vm_id + 1) & AVIC_VM_ID_MASK;
>>>> + if (vm_id == 0) { /* id is 1-based, zero is not okay */
>>>
>>> Suravee, did the reserved zero mean something?
>>>
>>>> + next_vm_id_wrapped = 1;
>>>> + goto again;
>>>> + }
>>>> + /* Is it still in use? Only possible if wrapped at least once */
>>>> + if (next_vm_id_wrapped) {
>>>> + hash_for_each_possible(svm_vm_data_hash, ka, hnode, vm_id) {
>>>> + struct kvm *k2 = container_of(ka, struct kvm, arch);
>>>> + struct kvm_arch *vd2 = &k2->arch;
>>>> + if (vd2->avic_vm_id == vm_id)
>>>> + goto again;
>>>
>>> Although hitting the case where all 2^24 ids are already used is
>>> practically impossible, I don't like the loose end ...
>>
>> I think even the case where 2^16 ids are used deserves a medal. Why
>> don't we just make the bitmap 8 KiB and call it a day? :)
>
> Well, RAM is cheap, but a 4-byte variable is still thousands of times
> smaller than an 8 KiB bitmap.
>
> Since a 256-element hash table is used here, you need ~one hundred live
> VMs before creation of new VMs becomes (very slightly) slower than with
> the bitmap approach, and that only after 16777216 VMs have been created
> since reboot.
I guess that's fair since we already have the hash table for other uses.
Paolo
> If you want to spend RAM on speeding this up, you can increase the hash
> table size instead. That would speed up avic_ga_log_notifier() too.
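For comparison, the bitmap alternative Paolo floated (cap ids at 2^16 so the bitmap fits in 8 KiB) could look roughly like the sketch below. This is a hypothetical user-space model under stated assumptions: the names `bitmap_alloc_vm_id`/`bitmap_free_vm_id` and the open-coded bit scan are illustrative only; the kernel would use its own bitmap API instead.

```c
/* Hypothetical sketch of the bitmap alternative: 2^16 ids tracked in an
 * 8 KiB bitmap, so allocation is a first-zero-bit scan and freeing an id
 * is a single bit clear. Not the kernel's bitmap API. */
#include <assert.h>
#include <stdint.h>

#define VM_ID_BITS  16
#define VM_ID_COUNT (1u << VM_ID_BITS)          /* 65536 ids */

static uint64_t id_bitmap[VM_ID_COUNT / 64];    /* 65536 bits = 8 KiB */

/* Return a free 1-based id, or 0 if all ids are in use. */
static uint32_t bitmap_alloc_vm_id(void)
{
	for (uint32_t w = 0; w < VM_ID_COUNT / 64; w++) {
		if (id_bitmap[w] == ~0ull)
			continue;               /* word full: skip 64 ids at once */
		for (uint32_t b = 0; b < 64; b++) {
			uint32_t id = w * 64 + b;
			if (id == 0)
				continue;       /* id 0 stays reserved */
			if (!(id_bitmap[w] & (1ull << b))) {
				id_bitmap[w] |= 1ull << b;
				return id;
			}
		}
	}
	return 0;                               /* pool exhausted */
}

static void bitmap_free_vm_id(uint32_t id)
{
	id_bitmap[id / 64] &= ~(1ull << (id % 64));
}
```

The trade-off matches the thread: allocation never degrades no matter how many ids have been handed out, at the cost of 8 KiB of static storage and an explicit free on VM destruction.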