Re: [PATCH v4 6/7] KVM: x86/mmu: Skip rmap operations if rmaps not allocated

From: Sean Christopherson
Date: Tue May 11 2021 - 15:51:48 EST


On Tue, May 11, 2021, Ben Gardon wrote:
> @@ -1260,9 +1268,12 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
> int i;
> bool write_protected = false;
>
> - for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
> - rmap_head = __gfn_to_rmap(gfn, i, slot);
> - write_protected |= __rmap_write_protect(kvm, rmap_head, true);
> + if (kvm->arch.memslots_have_rmaps) {
> + for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
> + rmap_head = __gfn_to_rmap(gfn, i, slot);
> + write_protected |= __rmap_write_protect(kvm, rmap_head,
> + true);

I vote to let "true" poke out.

> + }
> }
>
> if (is_tdp_mmu_enabled(kvm))

...

> @@ -5440,7 +5455,8 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
> */
> kvm_reload_remote_mmus(kvm);
>
> - kvm_zap_obsolete_pages(kvm);
> + if (kvm->arch.memslots_have_rmaps)
> + kvm_zap_obsolete_pages(kvm);

Hmm, for cases where we're iterating over the list of active_mmu_pages, I would
prefer to either leave the code as-is or short-circuit the helpers with a more
explicit:

if (list_empty(&kvm->arch.active_mmu_pages))
return ...;

I'd probably vote for leaving the code as-is; the loop iteration and list_empty()
check in kvm_mmu_commit_zap_page() add a single compare-and-jump in the
worst-case scenario.

In other words, restrict use of memslots_have_rmaps to flows that directly
walk the rmaps, as opposed to partially overloading it to mean "is using the
legacy MMU".

> write_unlock(&kvm->mmu_lock);
>

...

> @@ -5681,6 +5702,14 @@ void kvm_mmu_zap_all(struct kvm *kvm)
> int ign;
>
> write_lock(&kvm->mmu_lock);
> + if (is_tdp_mmu_enabled(kvm))
> + kvm_tdp_mmu_zap_all(kvm);
> +
> + if (!kvm->arch.memslots_have_rmaps) {
> + write_unlock(&kvm->mmu_lock);
> + return;

Another case where falling through to walking active_mmu_pages is perfectly ok.

> + }
> +
> restart:
> list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
> if (WARN_ON(sp->role.invalid))
> @@ -5693,9 +5722,6 @@ void kvm_mmu_zap_all(struct kvm *kvm)
>
> kvm_mmu_commit_zap_page(kvm, &invalid_list);
>
> - if (is_tdp_mmu_enabled(kvm))
> - kvm_tdp_mmu_zap_all(kvm);
> -
> write_unlock(&kvm->mmu_lock);
> }
>
> --
> 2.31.1.607.g51e8a6a459-goog
>