Re: [PATCH 2/8] KVM: pfncache: add a mark-dirty helper

From: Paul Durrant
Date: Thu Sep 14 2023 - 05:35:03 EST


On 14/09/2023 10:21, David Woodhouse wrote:
> On Thu, 2023-09-14 at 08:49 +0000, Paul Durrant wrote:
>> --- a/arch/x86/kvm/xen.c
>> +++ b/arch/x86/kvm/xen.c
>> @@ -430,14 +430,13 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
>>                 smp_wmb();
>>         }
>> -       if (user_len2)
>> +       if (user_len2) {
>> +               kvm_gpc_mark_dirty(gpc2);
>>                 read_unlock(&gpc2->lock);
>> +       }
>> +       kvm_gpc_mark_dirty(gpc1);
>>         read_unlock_irqrestore(&gpc1->lock, flags);
>> -
>> -       mark_page_dirty_in_slot(v->kvm, gpc1->memslot, gpc1->gpa >> PAGE_SHIFT);
>> -       if (user_len2)
>> -               mark_page_dirty_in_slot(v->kvm, gpc2->memslot, gpc2->gpa >> PAGE_SHIFT);
>>  }
>>
>>  void kvm_xen_update_runstate(struct kvm_vcpu *v, int state)

> ISTR there was a reason why the mark_page_dirty_in_slot() was called
> *after* unlocking. Although now I say it, that seems wrong... is that
> because the spinlock is only protecting the uHVA→kHVA mapping, while
> the memslot/gpa are going to remain valid even after unlock, because
> those are protected by sRCU?
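
For reference, the helper is meant to be a thin wrapper that is invoked with the gpc lock held; a minimal sketch of the idea, assuming the cache carries a back-pointer to the VM (gpc->kvm) alongside the memslot and gpa it already tracks (sketch only, not the exact patch text):

	void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc)
	{
		/* Caller must hold gpc->lock (read or write). */
		mark_page_dirty_in_slot(gpc->kvm, gpc->memslot,
					gpc->gpa >> PAGE_SHIFT);
	}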

Without the lock you could see an inconsistent GPA and memslot, so I think you could theoretically calculate a bogus rel_gfn and walk off the end of the dirty bitmap. Hence moving the call inside the lock, while I was in the neighbourhood, seemed like a good idea. I could call it out in the commit comment if you'd like.
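
To illustrate what I mean, the dirty-bitmap path in mark_page_dirty_in_slot() boils down to roughly the following (simplified sketch; the real function also handles the dirty ring), with gfn being the gpa shifted down by PAGE_SHIFT:

	/* Simplified sketch of the mark_page_dirty_in_slot() bitmap path. */
	if (memslot && memslot->dirty_bitmap) {
		unsigned long rel_gfn = gfn - memslot->base_gfn;

		/*
		 * A gpa paired with a memslot from a different generation
		 * can push rel_gfn past memslot->npages, i.e. outside the
		 * dirty bitmap.
		 */
		set_bit_le(rel_gfn, memslot->dirty_bitmap);
	}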

Paul