Re: [PATCH RESEND v2 08/17] KVM: X86: Implement ring-based dirty memory tracking
From: Peter Xu
Date: Wed Jan 08 2020 - 10:52:18 EST
On Fri, Dec 20, 2019 at 08:49:29PM -0500, Peter Xu wrote:
> +int kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset)
> +{
> + struct kvm_dirty_gfn *entry;
> + struct kvm_dirty_ring_indices *indices = ring->indices;
> +
> + /*
> + * Note: here we will start waiting even when only soft-full,
> + * because we can't risk making the ring completely full, since
> + * vcpu0 could use it right after us, and if vcpu0's ring gets
> + * completely full it could deadlock if we wait with mmu_lock held.
> + */
> + if (kvm_get_running_vcpu() == NULL &&
> + kvm_dirty_ring_soft_full(ring))
> + return -EBUSY;
I plan to repost next week, but before that I'd like to know whether
there's any further (negative) feedback design-wise, especially
here, where working around the kvmgt issue is still a bit tricky.
Now we still have the waitqueue, but it'll only be used for
no-vcpu-context dirtying, so:
- For no-vcpu-context: the thread could wait in the waitqueue if it
  makes vcpu0's ring soft-full (note: previously the trigger was
  hard-full; waiting already at soft-full makes sure the ring can
  never go completely full under vcpu0)
- For with-vcpu-context: we should never wait, guaranteed by the fact
  that KVM_RUN now returns to userspace when that vcpu's ring is
  soft-full, and the above waitqueue will make sure even vcpu0's
  ring won't be filled up by kvmgt
Again, this is still a workaround for kvmgt, and I think it should not
be needed after the refactoring. It's just a way to avoid depending on
that work, so this should work even with the current kvmgt. A rough
sketch of the caller-side logic follows.
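To make that concrete, here is a minimal sketch of how the caller could
dispatch between the two cases. This is illustrative only:
kvm_get_running_vcpu(), kvm_get_vcpu(), kvm_dirty_ring_push() and
kvm_dirty_ring_soft_full() exist in the series, but the exact shape of
mark_page_dirty_in_ring() and the kvm->dirty_ring_waitqueue name are
assumptions, not the posted code:

static void mark_page_dirty_in_ring(struct kvm *kvm, u32 slot, u64 offset)
{
	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
	struct kvm_dirty_ring *ring;

	if (vcpu) {
		/*
		 * With a vcpu context: use this vcpu's own ring.  This
		 * never blocks, because KVM_RUN exits to userspace when
		 * the ring goes soft-full, so there is always room left.
		 */
		kvm_dirty_ring_push(&vcpu->dirty_ring, slot, offset);
		return;
	}

	/* No vcpu context (e.g. kvmgt): fall back to vcpu0's ring. */
	ring = &kvm_get_vcpu(kvm, 0)->dirty_ring;

	/*
	 * If vcpu0's ring is soft-full, wait for userspace to harvest
	 * entries before pushing, so the ring can never go completely
	 * full underneath vcpu0 (which could deadlock with mmu_lock).
	 */
	while (kvm_dirty_ring_push(ring, slot, offset) == -EBUSY)
		wait_event(kvm->dirty_ring_waitqueue,
			   !kvm_dirty_ring_soft_full(ring));
}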
> +
> + /* It should never get completely full when called with a vcpu context */
> + WARN_ON_ONCE(kvm_dirty_ring_full(ring));
> +
> + entry = &ring->dirty_gfns[ring->dirty_index & (ring->size - 1)];
> + entry->slot = slot;
> + entry->offset = offset;
> + smp_wmb();
> + ring->dirty_index++;
> + WRITE_ONCE(indices->avail_index, ring->dirty_index);
> +
> + trace_kvm_dirty_ring_push(ring, slot, offset);
> +
> + return 0;
> +}
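For completeness, the smp_wmb() in the push path needs a pairing read
barrier on the harvesting side: the consumer must read avail_index
first and only then the entries it covers. Below is a kernel-style
sketch of that pairing; handle_dirty_page() is hypothetical, and the
fetch_index handling is my assumption about the consumer cursor:

static void harvest_dirty_gfns(struct kvm_dirty_ring_indices *indices,
			       struct kvm_dirty_gfn *gfns, u32 size)
{
	u32 fetch = READ_ONCE(indices->fetch_index);
	u32 avail = READ_ONCE(indices->avail_index);

	/*
	 * Pairs with the producer's smp_wmb(): once we observe
	 * avail_index, all entries published before it are visible.
	 */
	smp_rmb();

	while (fetch != avail) {
		struct kvm_dirty_gfn *e = &gfns[fetch & (size - 1)];

		handle_dirty_page(e->slot, e->offset); /* hypothetical */
		fetch++;
	}

	/* Tell the producer how far we have consumed. */
	WRITE_ONCE(indices->fetch_index, fetch);
}

Note the "& (size - 1)" mask mirrors the push path and, like it,
assumes the ring size is a power of two.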
--
Peter Xu