[PATCH 00/18] KVM: Consolidate and optimize MMU notifiers
From: Sean Christopherson
Date: Thu Mar 25 2021 - 22:20:49 EST
The end goal of this series is to optimize the MMU notifiers to take
mmu_lock if and only if the notification is relevant to KVM, i.e. the hva
range overlaps a memslot. Large VMs (hundreds of vCPUs) are very
sensitive to mmu_lock being taken for write at inopportune times, and
such VMs also tend to be "static", e.g. backed by HugeTLB with minimal
page shenanigans. The vast majority of notifications for these VMs will
be spurious (for KVM), and eliding mmu_lock for spurious notifications
avoids an otherwise unacceptable disruption to the guest.
To get there without potentially degrading performance, e.g. due to
multiple memslot lookups, especially on non-x86 architectures where the
use cases are largely unknown (from my perspective), this series first
consolidates the MMU notifier logic by moving the hva->gfn lookups into
common KVM code.
Applies on my TDP MMU TLB flushing bug fixes[*], which conflict horribly
with the TDP MMU changes in this series. That code applies on kvm/queue
(commit 4a98623d5d90, "KVM: x86/mmu: Mark the PAE roots as decrypted for
shadow paging").
Speaking of conflicts, Ben will soon be posting a series to convert a
bunch of TDP MMU flows to take mmu_lock only for read. Presumably there
will be an absurd number of conflicts; Ben and I will sort out the
conflicts in whichever series loses the race.
Well tested on Intel and AMD. Compile tested for arm64, MIPS, PPC,
PPC e500, and s390. Absolutely needs to be tested for real on non-x86;
I give it even odds that I introduced an off-by-one bug somewhere.
[*] https://lkml.kernel.org/r/20210325200119.1359384-1-seanjc@xxxxxxxxxx
Patches 1-7 are x86 specific prep patches to play nice with moving
the hva->gfn memslot lookups into common code. There ended up being waaay
more of these than I expected/wanted, but I had a hell of a time getting
the flushing logic right when shuffling the memslot and address space
loops. In the end, I was more confident I got things correct by batching
the flushes.
Patch 8 moves the existing API prototypes into common code. It could
technically be dropped since the old APIs are gone in the end, but I
thought the switch to the new APIs would suck a bit less this way.
Patch 9 moves arm64's MMU notifier tracepoints into common code so that
they are not lost when arm64 is converted to the new APIs, and so that all
architectures can benefit.
Patch 10 moves x86's memslot walkers into common KVM. I chose x86 purely
because I could actually test it. All architectures use nearly identical
code, so I don't think it actually matters in the end.
Patches 11-13 move arm64, MIPS, and PPC to the new APIs.
Patch 14 yanks out the old APIs.
Patch 15 adds the mmu_lock elision, but only for unpaired notifications.
Patch 16 adds mmu_lock elision for paired .invalidate_range_{start,end}().
This is quite nasty and no small part of me thinks the patch should be
burned with fire (I won't spoil it any further), but it's also the most
problematic scenario for our particular use case. :-/
Patches 17-18 are additional x86 cleanups.
Sean Christopherson (18):
KVM: x86/mmu: Coalesce TDP MMU TLB flushes when zapping collapsible
SPTEs
KVM: x86/mmu: Move flushing for "slot" handlers to caller for legacy
MMU
KVM: x86/mmu: Coalesce TLB flushes when zapping collapsible SPTEs
KVM: x86/mmu: Coalesce TLB flushes across address spaces for gfn range
zap
KVM: x86/mmu: Pass address space ID to __kvm_tdp_mmu_zap_gfn_range()
KVM: x86/mmu: Pass address space ID to TDP MMU root walkers
KVM: x86/mmu: Use leaf-only loop for walking TDP SPTEs when changing
SPTE
KVM: Move prototypes for MMU notifier callbacks to generic code
KVM: Move arm64's MMU notifier trace events to generic code
KVM: Move x86's MMU notifier memslot walkers to generic code
KVM: arm64: Convert to the gfn-based MMU notifier callbacks
KVM: MIPS/MMU: Convert to the gfn-based MMU notifier callbacks
KVM: PPC: Convert to the gfn-based MMU notifier callbacks
KVM: Kill off the old hva-based MMU notifier callbacks
KVM: Take mmu_lock when handling MMU notifier iff the hva hits a
memslot
KVM: Don't take mmu_lock for range invalidation unless necessary
KVM: x86/mmu: Allow yielding during MMU notifier unmap/zap, if
possible
KVM: x86/mmu: Drop trace_kvm_age_page() tracepoint
arch/arm64/include/asm/kvm_host.h | 5 -
arch/arm64/kvm/mmu.c | 118 ++----
arch/arm64/kvm/trace_arm.h | 66 ----
arch/mips/include/asm/kvm_host.h | 5 -
arch/mips/kvm/mmu.c | 97 +----
arch/powerpc/include/asm/kvm_book3s.h | 12 +-
arch/powerpc/include/asm/kvm_host.h | 7 -
arch/powerpc/include/asm/kvm_ppc.h | 9 +-
arch/powerpc/kvm/book3s.c | 18 +-
arch/powerpc/kvm/book3s.h | 10 +-
arch/powerpc/kvm/book3s_64_mmu_hv.c | 98 ++---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 25 +-
arch/powerpc/kvm/book3s_hv.c | 12 +-
arch/powerpc/kvm/book3s_pr.c | 56 +--
arch/powerpc/kvm/e500_mmu_host.c | 29 +-
arch/powerpc/kvm/trace_booke.h | 15 -
arch/x86/include/asm/kvm_host.h | 6 +-
arch/x86/kvm/mmu/mmu.c | 180 ++++-----
arch/x86/kvm/mmu/mmu_internal.h | 10 +
arch/x86/kvm/mmu/tdp_mmu.c | 344 +++++++-----------
arch/x86/kvm/mmu/tdp_mmu.h | 31 +-
include/linux/kvm_host.h | 22 +-
include/trace/events/kvm.h | 90 +++--
tools/testing/selftests/kvm/lib/kvm_util.c | 4 -
.../selftests/kvm/lib/x86_64/processor.c | 2 +
virt/kvm/kvm_main.c | 312 ++++++++++++----
26 files changed, 697 insertions(+), 886 deletions(-)
--
2.31.0.291.g576ba9dcdaf-goog