[PATCH 4/5] KVM: pfncache: Add lockless checking of cache invalidation events
From: Sean Christopherson
Date: Tue Oct 15 2024 - 16:57:54 EST
When handling an mmu_notifier invalidation event, perform the initial check
for overlap with a valid gfn_to_pfn_cache without acquiring the cache's
lock, and instead ensure that either the invalidation event observes a valid
cache, or the cache observes the invalidation event.
Locklessly checking gpc->valid and gpc->uhva relies on several existing
invariants (a sketch of the resulting usage pattern follows the list):
- Changing gpc->uhva requires performing a refresh()
- A cache can be made valid only during refresh()
- Only one task can execute refresh() at a time
- Tasks must check() a cache after a successful refresh()
- check() must hold gpc->lock (usually for read)
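For context only (not part of this patch), a rough sketch of the consumer
pattern those invariants describe, modeled on existing kvm_gpc_check() and
kvm_gpc_refresh() usage in KVM; the surrounding function, error handling,
and the PAGE_SIZE length are illustrative:

	unsigned long flags;

	read_lock_irqsave(&gpc->lock, flags);		/* check() runs under gpc->lock */
	while (!kvm_gpc_check(gpc, PAGE_SIZE)) {
		read_unlock_irqrestore(&gpc->lock, flags);

		/* refresh() is serialized; only one task refreshes at a time. */
		if (kvm_gpc_refresh(gpc, PAGE_SIZE))
			return;				/* refresh failed, don't consume */

		/* A successful refresh() must be followed by another check(). */
		read_lock_irqsave(&gpc->lock, flags);
	}

	/* gpc->khva and gpc->uhva are stable until gpc->lock is dropped. */
	read_unlock_irqrestore(&gpc->lock, flags);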
These invariants mean that if KVM observes an invalid gpc, or if the uhva is
unstable, then a refresh() is in progress or will be performed prior to
consuming the new uhva. And so KVM simply needs to ensure that refresh()
sees the invalidation and retries, or that the invalidation sees the fully
valid gpc.
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
---
virt/kvm/pfncache.c | 51 ++++++++++++++++++++++++++-------------------
1 file changed, 29 insertions(+), 22 deletions(-)
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 3d5bc9bab3d9..2163bb6b899c 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -48,32 +48,32 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
 			max(kvm->mmu_gpc_invalidate_range_end, end);
 	}
 
+	/*
+	 * Ensure that either each cache sees the to-be-invalidated range and
+	 * retries if necessary, or that this task sees the cache's valid flag
+	 * and invalidates the cache before completing the mmu_notifier event.
+	 * Note, gpc->uhva must be set before gpc->valid, and if gpc->uhva is
+	 * modified the cache must be re-validated. Pairs with the smp_mb() in
+	 * hva_to_pfn_retry().
+	 */
+	smp_mb__before_atomic();
+
 	spin_lock(&kvm->gpc_lock);
 	list_for_each_entry(gpc, &kvm->gpc_list, list) {
-		read_lock_irq(&gpc->lock);
-
-		/* Only a single page so no need to care about length */
-		if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
-		    gpc->uhva >= start && gpc->uhva < end) {
-			read_unlock_irq(&gpc->lock);
-
-			/*
-			 * There is a small window here where the cache could
-			 * be modified, and invalidation would no longer be
-			 * necessary. Hence check again whether invalidation
-			 * is still necessary once the write lock has been
-			 * acquired.
-			 */
-
-			write_lock_irq(&gpc->lock);
-			if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
-			    gpc->uhva >= start && gpc->uhva < end)
-				gpc->valid = false;
-			write_unlock_irq(&gpc->lock);
+		if (!gpc->valid || gpc->uhva < start || gpc->uhva >= end)
 			continue;
-		}
 
-		read_unlock_irq(&gpc->lock);
+		write_lock_irq(&gpc->lock);
+
+		/*
+		 * Verify the cache still needs to be invalidated after
+		 * acquiring gpc->lock, to avoid an unnecessary invalidation
+		 * in the unlikely scenario the cache became valid with a
+		 * different userspace virtual address.
+		 */
+		if (gpc->valid && gpc->uhva >= start && gpc->uhva < end)
+			gpc->valid = false;
+		write_unlock_irq(&gpc->lock);
 	}
 	spin_unlock(&kvm->gpc_lock);
 }
@@ -266,6 +266,13 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 		 */
 		WARN_ON_ONCE(gpc->valid);
 		gpc->valid = true;
+
+		/*
+		 * Ensure valid is visible before checking if retry is needed.
+		 * Pairs with the smp_mb__before_atomic() in
+		 * gfn_to_pfn_cache_invalidate().
+		 */
+		smp_mb();
 	} while (gpc_invalidate_retry_hva(gpc, mmu_seq));
 
 	gpc->pfn = new_pfn;
--
2.47.0.rc1.288.g06298d1525-goog