Re: [PATCH] KVM: MMU: Introduce single thread to zap collapsible sptes

From: Wanpeng Li
Date: Thu Dec 20 2018 - 19:46:32 EST


On Thu, 20 Dec 2018 at 22:43, Radim Krčmář <rkrcmar@xxxxxxxxxx> wrote:
>
> 2018-12-06 15:58+0800, Wanpeng Li:
> > From: Wanpeng Li <wanpengli@xxxxxxxxxxx>
> >
> > Last year, engineers from Huawei reported that a call to memory_global_dirty_log_start/stop()
> > takes 13s for a guest with 4T of memory, freezing the guest for so long that the
> > migration downtime becomes unacceptable. [1] [2]
> >
> > Guangrong pointed out:
> >
> > | collapsible_sptes zaps 4k mappings to make memory-read happy, it is not
> > | required by the semantics of KVM_SET_USER_MEMORY_REGION and it is not
> > | urgent for vCPU's running, it could be done in a separate thread and use
> > | lock-break technology.
> >
> > [1] https://lists.gnu.org/archive/html/qemu-devel/2017-04/msg05249.html
> > [2] https://www.mail-archive.com/qemu-devel@xxxxxxxxxx/msg449994.html
> >
> > Guests with several TB of memory are common now that NVDIMM is deployed in cloud
> > environments. This patch uses a worker thread to zap collapsible sptes, lazily
> > collapsing small sptes back into large sptes during rollback after a live migration fails.
> >
> > Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> > Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
> > Signed-off-by: Wanpeng Li <wanpengli@xxxxxxxxxxx>
> > ---
> > @@ -5679,14 +5679,41 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
> >          return need_tlb_flush;
> >  }
> >
> > +void zap_collapsible_sptes_fn(struct work_struct *work)
> > +{
> > +        struct kvm_memory_slot *memslot;
> > +        struct kvm_memslots *slots;
> > +        struct delayed_work *dwork = to_delayed_work(work);
> > +        struct kvm_arch *ka = container_of(dwork, struct kvm_arch,
> > +                                           kvm_mmu_zap_collapsible_sptes_work);
> > +        struct kvm *kvm = container_of(ka, struct kvm, arch);
> > +        int i;
> > +
> > +        mutex_lock(&kvm->slots_lock);
> > +        for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
> > +                spin_lock(&kvm->mmu_lock);
> > +                slots = __kvm_memslots(kvm, i);
> > +                kvm_for_each_memslot(memslot, slots) {
> > +                        slot_handle_leaf(kvm, (struct kvm_memory_slot *)memslot,
> > +                                         kvm_mmu_zap_collapsible_spte, true);
> > +                        if (need_resched() || spin_needbreak(&kvm->mmu_lock))
> > +                                cond_resched_lock(&kvm->mmu_lock);
>
> I think we shouldn't zap all memslots when kvm_mmu_zap_collapsible_sptes
> only wanted to zap a specific one.
> Please add a list of memslots to be zapped; delete from the list here
> and add in kvm_mmu_zap_collapsible_sptes().
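
That is, something like the sketch below, with kvm_mmu_zap_collapsible_sptes()
queueing and the worker draining the list. Note this is only a sketch to pin
down the idea: the zap_list/zap_list_lock fields and the zap_request type are
made up here, none of this is in the patch:

struct zap_request {
        struct list_head link;
        struct kvm_memory_slot *slot;   /* can go stale before the worker runs */
};

void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
                                   const struct kvm_memory_slot *memslot)
{
        struct zap_request *req = kzalloc(sizeof(*req), GFP_KERNEL);

        if (!req)
                return;
        req->slot = (struct kvm_memory_slot *)memslot;
        spin_lock(&kvm->arch.zap_list_lock);
        /* a non-empty list means the work is already scheduled */
        if (list_empty(&kvm->arch.zap_list))
                schedule_delayed_work(&kvm->arch.kvm_mmu_zap_collapsible_sptes_work,
                                      KVM_MMU_ZAP_DELAYED);
        list_add_tail(&req->link, &kvm->arch.zap_list);
        spin_unlock(&kvm->arch.zap_list_lock);
}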

Yeah, that was my original plan; however, I observed a lot of races here:
the memslot can disappear or be modified underneath us before the worker
thread starts zapping, even if I introduce a lock to protect the list. This
patch instead delays the worker thread by 60s (long enough, I assume, for
memory_global_dirty_log_stop to have completed) so that it coalesces all
the zap requests issued after a live migration fails.
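
For reference, the scheduling side in the version posted is deliberately
simple; the below is a sketch of the remainder of the function whose first
line appears in the hunk above, not the exact patch body:

void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
                                   const struct kvm_memory_slot *memslot)
{
        /*
         * The specific memslot is ignored: the worker rescans all
         * memslots once KVM_MMU_ZAP_DELAYED (60s) has elapsed, which
         * coalesces every request made while the failed migration is
         * rolled back.
         */
        if (!kvm->arch.zap_in_progress) {
                kvm->arch.zap_in_progress = true;
                schedule_delayed_work(&kvm->arch.kvm_mmu_zap_collapsible_sptes_work,
                                      KVM_MMU_ZAP_DELAYED);
        }
}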

Regards,
Wanpeng Li

>
> > +                }
> > +                spin_unlock(&kvm->mmu_lock);
> > +        }
> > +        kvm->arch.zap_in_progress = false;
> > +        mutex_unlock(&kvm->slots_lock);
> > +}
> > +
> > +#define KVM_MMU_ZAP_DELAYED (60 * HZ)
> >  void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
> >                                     const struct kvm_memory_slot *memslot)
> >  {
> > -        /* FIXME: const-ify all uses of struct kvm_memory_slot. */
> > -        spin_lock(&kvm->mmu_lock);
> > -        slot_handle_leaf(kvm, (struct kvm_memory_slot *)memslot,
> > -                         kvm_mmu_zap_collapsible_spte, true);
> > -        spin_unlock(&kvm->mmu_lock);
> > +        if (!kvm->arch.zap_in_progress) {
>
> The list can also serve in place of zap_in_progress -- if there were any
> elements in it, then there is no need to schedule the work again.
>
> Thanks.