Re: [PATCH 0/4] KVM: API to block and resume all running vcpus in a vm

From: Sean Christopherson
Date: Mon Oct 24 2022 - 20:22:22 EST


On Mon, Oct 24, 2022, Christian Borntraeger wrote:
> On 24.10.22 at 10:33, Emanuele Giuseppe Esposito wrote:
> > On 24/10/2022 at 09:56, Christian Borntraeger wrote:
> > > > Therefore the simplest solution is to pause all vcpus on the KVM
> > > > side, so that:

Simplest for QEMU maybe, most definitely not simplest for KVM.

> > > > - userspace just needs to call the new API before making memslot
> > > > changes, keeping modifications to a minimum
> > > > - dirty page updates are also performed while the vcpus are blocked,
> > > > so there is no window between the dirty page ioctl and the memslot
> > > > modifications, since the vcpus are all stopped
> > > > - no need to modify the existing memslot API
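For concreteness, the userspace flow being proposed would look roughly
like the sketch below. The KVM_PAUSE_ALL_VCPUS/KVM_RESUME_ALL_VCPUS
ioctl names and the collect_dirty_log() helper are placeholders for
illustration, not the interface the series actually defines:

    #include <err.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Hypothetical helper: harvest KVM_GET_DIRTY_LOG for each slot. */
    void collect_dirty_log(int vm_fd);

    static void update_memslot(int vm_fd,
                               struct kvm_userspace_memory_region *mem)
    {
            if (ioctl(vm_fd, KVM_PAUSE_ALL_VCPUS, 0) < 0)
                    err(1, "KVM_PAUSE_ALL_VCPUS");

            /*
             * Every vCPU is blocked in the kernel, so there is no
             * window between harvesting the dirty log and changing
             * the memslot.
             */
            collect_dirty_log(vm_fd);
            if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, mem) < 0)
                    err(1, "KVM_SET_USER_MEMORY_REGION");

            if (ioctl(vm_fd, KVM_RESUME_ALL_VCPUS, 0) < 0)
                    err(1, "KVM_RESUME_ALL_VCPUS");
    }

The whole pause/update/resume sequence then runs in a single thread and
never has to drop the BQL, which is the property Emanuele points to
below.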
> > > Isn't QEMU able to achieve the same goal today by forcing all vCPUs
> > > into userspace with a signal? Can you provide some rationale for why
> > > this is better in the cover letter or patch description?
> > >
> > David Hildenbrand tried to propose something similar here:
> > https://github.com/davidhildenbrand/qemu/commit/86b1bf546a8d00908e33f7362b0b61e2be8dbb7a
> >
> > While it is not optimized, I think it's more complex than the current
> > series, since QEMU would also have to make sure all running ioctls
> > finish and prevent new ones from being executed.
> >
> > Also, we can't use pause_all_vcpus()/resume_all_vcpus() because they
> > drop the BQL.
> >
> > Would that be ok as rationale?
>
> Yes, that helps and should be part of the cover letter for the next iteration.

But that doesn't explain why KVM needs to get involved; it only explains why QEMU
can't use its existing pause_all_vcpus(). I do not understand why this is a
problem QEMU needs KVM's help to solve.
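
For reference, the signal-based alternative Christian suggests looks
roughly like the minimal sketch below. It is not QEMU's actual code:
QEMU uses SIG_IPI and sets kvm_run->immediate_exit from the handler,
and struct vcpu, wait_for_resume() and handle_exit() here are
hypothetical:

    #include <errno.h>
    #include <pthread.h>
    #include <signal.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    #define SIG_VCPU_KICK SIGUSR1   /* QEMU uses SIG_IPI for this */

    struct vcpu { int fd; pthread_t tid; };

    void wait_for_resume(struct vcpu *vcpu); /* hypothetical barrier */
    void handle_exit(struct vcpu *vcpu);     /* hypothetical handler */

    static void kick_handler(int sig)
    {
            /* Empty: delivery alone makes KVM_RUN return -EINTR. */
            (void)sig;
    }

    static void *vcpu_thread(void *opaque)
    {
            struct vcpu *vcpu = opaque;

            signal(SIG_VCPU_KICK, kick_handler);
            for (;;) {
                    if (ioctl(vcpu->fd, KVM_RUN, 0) < 0 &&
                        errno == EINTR) {
                            /*
                             * Kicked out of the guest; park until the
                             * memslot update is done.  If the signal
                             * lands just before KVM_RUN there is a
                             * race, which QEMU closes by setting
                             * kvm_run->immediate_exit in the handler.
                             */
                            wait_for_resume(vcpu);
                            continue;
                    }
                    handle_exit(vcpu);
            }
            return NULL;
    }

    /* Pausing a vCPU is then: pthread_kill(vcpu->tid, SIG_VCPU_KICK) */

What Emanuele argues above is that the hard part is not the kick
itself, but making it safe against ioctls that are already in flight,
without dropping the BQL.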