Re: [PATCH 09/12] KVM: MMU: coalesce zapping page after mmu_sync_children

From: Paolo Bonzini
Date: Thu Feb 25 2016 - 03:46:59 EST




On 25/02/2016 03:15, Takuya Yoshikawa wrote:
> On 2016/02/24 22:17, Paolo Bonzini wrote:
>> Move the call to kvm_mmu_flush_or_zap outside the loop.
>>
>> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
>> ---
>> arch/x86/kvm/mmu.c | 9 ++++++---
>> 1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 725316df32ec..6d47b5c43246 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -2029,24 +2029,27 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
>> struct mmu_page_path parents;
>> struct kvm_mmu_pages pages;
>> LIST_HEAD(invalid_list);
>> + bool flush = false;
>>
>> while (mmu_unsync_walk(parent, &pages)) {
>> bool protected = false;
>> - bool flush = false;
>>
>> for_each_sp(pages, sp, parents, i)
>> protected |= rmap_write_protect(vcpu, sp->gfn);
>>
>> - if (protected)
>> + if (protected) {
>> kvm_flush_remote_tlbs(vcpu->kvm);
>> + flush = false;
>> + }
>>
>> for_each_sp(pages, sp, parents, i) {
>> flush |= kvm_sync_page(vcpu, sp, &invalid_list);
>> mmu_pages_clear_parents(&parents);
>> }
>> - kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
>> cond_resched_lock(&vcpu->kvm->mmu_lock);
>
> This may release the mmu_lock before committing the zapping.
> Is it safe? If so, we may want to see the reason in the changelog.

It should be safe; the page is already marked as invalid and hence the
role will not match in kvm_mmu_get_page.

The idea is simply that committing the zap is expensive (for example it
requires a remote TLB flush), so you want to do it as rarely as
possible. I'll note this in the commit message.

Paolo

> Takuya
>
>> }
>> +
>> + kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
>> }
>>
>> static void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp)
>>