Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking

From: Peter Xu
Date: Fri Dec 20 2019 - 13:19:22 EST


On Fri, Dec 13, 2019 at 03:23:24PM -0500, Peter Xu wrote:
> > > +If one of the ring buffers is full, the guest will exit to userspace
> > > +with the exit reason set to KVM_EXIT_DIRTY_LOG_FULL, and the
> > > +KVM_RUN ioctl will return -EINTR. Once that happens, userspace
> > > +should pause all the vcpus, then harvest all the dirty pages and
> > > +rearm the dirty traps. It can unpause the guest after that.
> >
> > Except for the condition above, why is it necessary to pause other VCPUs
> > than the one being harvested?
>
> This is a good question. Paolo could correct me if I'm wrong.
>
> Firstly I think this should rarely happen if the userspace is
> collecting the dirty bits from time to time. If it happens, we'll
> need to call KVM_RESET_DIRTY_RINGS to reset all the rings. Then the
> question actually becomes: whether we'd like to have a per-vcpu
> KVM_RESET_DIRTY_RINGS?

Hmm, rethinking this, I may have erroneously deduced something from
Christophe's question. Christophe was asking why we kick the other
vcpus; that does not imply the RESET needs to be per-vcpu.

So now I tend to agree with Christophe: I can't find a reason why we
need to kick all the vcpus out. Even if we need to flush the TLBs of
all vcpus on RESET, userspace can simply collect all the rings before
sending the RESET, so that is not a reason to explicitly kick them
from userspace. I plan to remove this sentence in the next version
(which is a documentation-only update).

--
Peter Xu