On 18/08/2015 11:30, Avi Kivity wrote:
>> KVM_USER_EXIT in practice should be so rare (at least with in-kernel
>> LAPIC) that I don't think this matters. KVM_USER_EXIT is relatively
>> uninteresting, it only exists to provide an alternative to signals that
>> doesn't require expensive atomics on each and every KVM_RUN. :(
>
> Ah, so the idea is to remove the cost of changing the signal mask?

Yes, it's explained in the cover letter.

> IIRC the lock was only task-wide and uncontended. Problem is, it's on
> the node that created the thread rather than the node that is running
> it, and inter-node atomics are really, really slow.

Yes, although it looks like a thread-local operation, it takes a
process-wide lock.
For guests spanning >1 host NUMA nodes it's not really practical to
ensure that the thread is created on the right node. Even for guests
that fit into 1 host node, if you rely on AutoNUMA the VCPUs are created
too early for AutoNUMA to have any effect. And newer machines have
frighteningly small nodes (two nodes per socket, so it's something like
7 pCPUs if you don't have hyper-threading enabled). True, the NUMA
penalty within the same socket is not huge, but it still costs a few
thousand clock cycles on vmexit.flat and this feature sweeps it away
completely.
> I expect most user wakeups are via irqfd, so indeed the performance of
> KVM_USER_EXIT is uninteresting.

Yup, either irqfd or KVM_SET_SIGNAL_MSI.
Paolo