Re: [PATCH v5 11/13] KVM: Zap existing KVM mappings when pages changed in the private fd
From: Sean Christopherson
Date: Thu Apr 07 2022 - 23:07:03 EST
On Tue, Apr 05, 2022, Michael Roth wrote:
> On Thu, Mar 10, 2022 at 10:09:09PM +0800, Chao Peng wrote:
> > static inline bool kvm_slot_is_private(const struct kvm_memory_slot *slot)
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 67349421eae3..52319f49d58a 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -841,8 +841,43 @@ static int kvm_init_mmu_notifier(struct kvm *kvm)
> > #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */
> >
> > #ifdef CONFIG_MEMFILE_NOTIFIER
> > +static void kvm_memfile_notifier_handler(struct memfile_notifier *notifier,
> > + pgoff_t start, pgoff_t end)
> > +{
> > + int idx;
> > + struct kvm_memory_slot *slot = container_of(notifier,
> > + struct kvm_memory_slot,
> > + notifier);
> > + struct kvm_gfn_range gfn_range = {
> > + .slot = slot,
> > + .start = start - (slot->private_offset >> PAGE_SHIFT),
> > + .end = end - (slot->private_offset >> PAGE_SHIFT),
> > + .may_block = true,
> > + };
> > + struct kvm *kvm = slot->kvm;
> > +
> > + gfn_range.start = max(gfn_range.start, slot->base_gfn);
> > + gfn_range.end = min(gfn_range.end, slot->base_gfn + slot->npages);
> > +
> > + if (gfn_range.start >= gfn_range.end)
> > + return;
> > +
> > + idx = srcu_read_lock(&kvm->srcu);
> > + KVM_MMU_LOCK(kvm);
> > + kvm_unmap_gfn_range(kvm, &gfn_range);
> > + kvm_flush_remote_tlbs(kvm);
> > + KVM_MMU_UNLOCK(kvm);
> > + srcu_read_unlock(&kvm->srcu, idx);
>
> Should this also invalidate gfn_to_pfn_cache mappings? Otherwise it seems
> possible the kernel might end up inadvertently writing to now-private guest
> memory via a now-stale gfn_to_pfn_cache entry.
Yes. Ideally we'd get the mmu_notifier and memfile_notifier flows to share
common code and avoid these goofs. I tried very briefly, but they're just
different enough that unifying them gets ugly.
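Completely untested, and the helper below is made up (nothing like it exists
today), but the memfile path would want roughly a gpa-keyed variant of
gfn_to_pfn_cache_invalidate_start(), since fd-backed private memory has no
uhva to match against:

/*
 * Hypothetical helper, modeled on gfn_to_pfn_cache_invalidate_start() but
 * keyed on gpa instead of uhva, since the private range has no userspace
 * mapping.  Marks any cache whose page falls in [start, end) as invalid so
 * the next use forces a refresh.
 */
static void kvm_gpc_invalidate_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end)
{
	struct gfn_to_pfn_cache *gpc;

	spin_lock(&kvm->gpc_lock);
	list_for_each_entry(gpc, &kvm->gpc_list, list) {
		write_lock_irq(&gpc->lock);

		/* Each cache maps a single page, no need to care about length. */
		if (gpc->valid && gpa_to_gfn(gpc->gpa) >= start &&
		    gpa_to_gfn(gpc->gpa) < end)
			gpc->valid = false;

		write_unlock_irq(&gpc->lock);
	}
	spin_unlock(&kvm->gpc_lock);
}

kvm_memfile_notifier_handler() would then call that with the same clamped gfn
range it hands to kvm_unmap_gfn_range().  Note the real invalidate_start() also
kicks vCPUs that are actively using a cache; that's omitted above for brevity.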