Re: [RFC PATCH 0/9] kvm: implement atomic memslot updates

From: David Hildenbrand
Date: Thu Sep 29 2022 - 04:24:48 EST


On 29.09.22 10:05, Emanuele Giuseppe Esposito wrote:


> On 28/09/2022 at 22:41, Sean Christopherson wrote:
>> On Wed, Sep 28, 2022, Paolo Bonzini wrote:
>>> On 9/28/22 17:58, Sean Christopherson wrote:
>>>> I don't disagree that the memslots API is lacking, but IMO that is somewhat
>>>> orthogonal to fixing KVM x86's "code fetch to MMIO" mess. Such a massive new API
>>>> should be viewed and prioritized as a new feature, not as a bug fix, e.g. I'd
>>>> like to have the luxury of being able to explore ideas beyond "let userspace
>>>> batch memslot updates", and I really don't want to feel pressured to get this
>>>> code reviewed and merged.

>>> I absolutely agree that this is not a bugfix. Most new features for KVM can
>>> be seen as bug fixes if you squint hard enough, but they're still features.

>> I guess I'm complaining that there isn't sufficient justification for this new
>> feature. The cover letter provides a bug that would be fixed by having batched
>> updates, but as above, that's really due to deficiencies in a different KVM ABI.
>>
>> Beyond that, there's no explanation of why this exact API is necessary, i.e. there
>> are no requirements given.

>> - Why can't this be solved in userspace?

> Because this would provide the "feature" only to QEMU, leaving every other
> hypervisor to implement its own.

> In addition (maybe you already answered this question but I couldn't
> find an answer in the email thread), does it make sense to stop all
> vCPUs for a couple of memslot updates? What if we have 100 vCPUs?
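
(A purely illustrative aside.) The interface under discussion is, roughly, a KVM ioctl that takes a whole list of memslot changes and applies them in the kernel, so that no vCPU ever observes a partially updated memslot layout and userspace does not have to stop vCPUs itself. The sketch below shows what such an interface could look like; the struct name, ioctl name and ioctl number are invented for this example and are not necessarily what the RFC series actually defines.

/*
 * Illustrative sketch only: the batch struct and ioctl below are made up
 * for this example and need not match the RFC. Entries reuse the existing
 * per-slot layout from <linux/kvm.h>.
 */
#include <linux/kvm.h>
#include <linux/types.h>

struct kvm_memory_region_batch {
        __u32 nent;                                     /* number of entries      */
        __u32 flags;                                    /* reserved, must be zero */
        struct kvm_userspace_memory_region entries[];   /* applied as one update:
                                                           no vCPU sees a partially
                                                           modified memslot layout */
};

/* Hypothetical ioctl number, for illustration only. */
#define KVM_SET_USER_MEMORY_REGION_BATCH \
        _IOW(KVMIO, 0xff, struct kvm_memory_region_batch)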


>> - Is performance a concern? I.e. are updates that need to be batched going to
>>   be high-frequency operations?

> Currently they are limited to boot time; even in an unmodified
> KVM/QEMU build, I count 86 memslot updates during boot with:
>
> ./qemu-system-x86_64 --overcommit cpu-pm=on --smp $v --accel kvm \
>   --display none

I *think* there are only ~3 problematic ones (split/resize), where we temporarily delete something we will re-add. At least that's what I remember from working on my prototype.
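
To make the split/resize case concrete: with the existing one-slot-at-a-time KVM_SET_USER_MEMORY_REGION ioctl, splitting a slot means deleting it (memory_size = 0) and then installing the two halves, so there is a window in which a concurrently running vCPU finds no memslot behind that GPA range and, on x86, an instruction fetch from it can end up being treated as MMIO (the "code fetch to MMIO" mess mentioned above). A minimal sketch, with made-up slot numbers and sizes:

/* Sketch: a memslot "split" with the existing per-slot API.
 * Build with: gcc -o split split.c  (slot numbers and sizes are arbitrary). */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

static void set_region(int vm_fd, __u32 slot, __u64 gpa, __u64 size, void *hva)
{
        struct kvm_userspace_memory_region r = {
                .slot            = slot,
                .guest_phys_addr = gpa,
                .memory_size     = size,   /* 0 means "delete this slot" */
                .userspace_addr  = (__u64)(unsigned long)hva,
        };

        if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &r) < 0) {
                perror("KVM_SET_USER_MEMORY_REGION");
                exit(1);
        }
}

int main(void)
{
        int kvm = open("/dev/kvm", O_RDWR);
        int vm = ioctl(kvm, KVM_CREATE_VM, 0);
        size_t size = 2UL << 20;           /* 2 MiB slot to be split */
        void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (kvm < 0 || vm < 0 || mem == MAP_FAILED) {
                perror("setup");
                return 1;
        }

        set_region(vm, 0, 0, size, mem);                 /* initial slot */

        /*
         * "Split" the slot into two halves. With the current API this is
         * three separate ioctls; between the delete and the re-adds, a
         * vCPU running concurrently sees no memslot at GPA 0 at all.
         */
        set_region(vm, 0, 0, 0, NULL);                   /* delete       */
        set_region(vm, 0, 0, size / 2, mem);             /* lower half   */
        set_region(vm, 1, size / 2, size / 2,
                   (char *)mem + size / 2);              /* upper half   */

        printf("split done (non-atomically)\n");
        return 0;
}

The point of batching, as discussed above, would be to let the delete and the two re-adds take effect as a single update, so that no vCPU can run in the window in between.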

--
Thanks,

David / dhildenb