Re: [RFC 2/2] KVM: VMX: Enable bus lock VM exit

From: Xiaoyao Li
Date: Thu Jul 02 2020 - 05:15:42 EST


On 7/1/2020 10:49 PM, Vitaly Kuznetsov wrote:
Xiaoyao Li <xiaoyao.li@xxxxxxxxx> writes:

On 7/1/2020 8:44 PM, Vitaly Kuznetsov wrote:
Xiaoyao Li <xiaoyao.li@xxxxxxxxx> writes:

On 7/1/2020 5:04 PM, Vitaly Kuznetsov wrote:
Chenyi Qiang <chenyi.qiang@xxxxxxxxx> writes:
[...]
 static const int kvm_vmx_max_exit_handlers =
@@ -6830,6 +6838,13 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	if (unlikely(vmx->exit_reason.failed_vmentry))
 		return EXIT_FASTPATH_NONE;
 
+	/*
+	 * check the exit_reason to see if there is a bus lock
+	 * happened in guest.
+	 */
+	if (vmx->exit_reason.bus_lock_detected)
+		handle_bus_lock(vcpu);

In case the ultimate goal is to have an exit to userspace on bus lock,

I don't think we will need an exit to userspace on bus lock. See below.

the two ways to reach handle_bus_lock() are very different: in case
we're handling EXIT_REASON_BUS_LOCK we can easily drop to userspace by
returning 0, but what are we going to do in the
exit_reason.bus_lock_detected case? The 'higher priority VM exit' may
require an exit to userspace too. So what's the plan? Maybe we can
ignore the case when we're exiting to userspace for some other reason
anyway, as that path is slow already, and force the exit otherwise?
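Something along these lines, perhaps (a very rough, untested sketch;
KVM_EXIT_BUS_LOCK is just a placeholder name for a would-be userspace
exit reason, and the helper below is made up for illustration):

/*
 * Rough sketch: post-process the result of the normal exit handler.
 * 'ret' follows the usual convention: 0 means "go to userspace",
 * > 0 means "resume the guest".  Only force a userspace exit when we
 * were about to resume the guest; if some other exit is already headed
 * to userspace, keep its exit reason untouched.
 */
static int maybe_force_bus_lock_exit(struct kvm_vcpu *vcpu, int ret)
{
	if (ret > 0 && to_vmx(vcpu)->exit_reason.bus_lock_detected) {
		vcpu->run->exit_reason = KVM_EXIT_BUS_LOCK;	/* placeholder */
		return 0;
	}
	return ret;
}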

And should we actually introduce KVM_EXIT_BUS_LOCK and a capability to
enable it here?


Introducing KVM_EXIT_BUS_LOCK may not help much. Whether it comes in as
EXIT_REASON_BUS_LOCK or as exit_reason.bus_lock_detected, the bus lock
has already happened by the time we see the VM exit. Exiting to
userspace cannot prevent the bus lock, so all userspace could do is
record and count it, just as this patch already does in
vcpu->stat.bus_locks.
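(For reference, the counting in the patch boils down to roughly the
following; this is a simplified sketch, not the exact hunk from the
series:)

static int handle_bus_lock(struct kvm_vcpu *vcpu)
{
	/* Record the event; the vCPU just keeps running. */
	vcpu->stat.bus_locks++;
	return 1;	/* 1 == keep running the guest */
}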

Exiting to userspace would allow implementing custom 'throttling'
policies to mitigate the 'noisy neighbour' problem. The simplest would
be to just inject some sleep time.
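For example, a VMM run loop could do something like this on such an
exit (purely illustrative: KVM_EXIT_BUS_LOCK is not a defined exit
reason today, the value below is a placeholder, and the sleep time is
arbitrary):

#include <linux/kvm.h>
#include <time.h>

#ifndef KVM_EXIT_BUS_LOCK
#define KVM_EXIT_BUS_LOCK 1000	/* placeholder value, illustration only */
#endif

static void throttle_on_bus_lock(struct kvm_run *run)
{
	if (run->exit_reason == KVM_EXIT_BUS_LOCK) {
		/* Penalize the offending vCPU before re-entering the guest. */
		struct timespec ts = { .tv_nsec = 100 * 1000 };	/* 100us, arbitrary */

		nanosleep(&ts, NULL);
	}
}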


So you want an exit to userspace for every bus lock and to leave the
policy entirely to userspace. Yes, that's doable.


In some cases we may not even want to have a VM exit: think of the
real-time/partitioning case, for example, where even on a bus lock we
may not want to add extra latency just to count such events.

For the real-time case, bus locks need to be avoided entirely, since a
bus lock takes many cycles and prevents everyone else from accessing
memory. If there is no bus lock, there is no bus lock VM exit to worry
about; if there is one, the latency requirement probably cannot be met
anyway because of it.

I'd suggest we make the new capability tri-state:
- disabled (no vmexit, default)
- stats only (what this patch does)
- userspace exit
But maybe this is overkill; I'd like to hear what others think.
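(If we did go that way, userspace would turn it on via KVM_ENABLE_CAP,
roughly like this; the capability name, number and mode values below
are made up purely for illustration:)

#include <linux/kvm.h>
#include <sys/ioctl.h>

#define KVM_CAP_BUS_LOCK_EXIT_TRISTATE	9999	/* placeholder, not a real cap */
#define BUS_LOCK_MODE_OFF		0	/* no VM exit (default) */
#define BUS_LOCK_MODE_STATS		1	/* count in vcpu->stat only */
#define BUS_LOCK_MODE_USER_EXIT		2	/* exit to userspace */

static int enable_bus_lock_mode(int vm_fd, unsigned long mode)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_BUS_LOCK_EXIT_TRISTATE,
		.args = { mode },
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}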

Yeah. Others' thoughts are very welcome.

Besides, for how to throttle, KVM may have to take the kernel's own
policy into account. The spec defines another feature for bare metal
that raises a #DB on a bus lock, and the native kernel will likely
implement some policy to limit the rate at which bus locks can happen.
QEMU threads would then have to follow that policy as well.

As you said, the exit_reason.bus_lock_detected case is the tricky one.
We cannot do something similar by extending vcpu->run->exit_reason;
that would break the ABI. Maybe we can extend vcpu->run->flags to
indicate that a bus lock was detected alongside the other exit reason?
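Something like the following, maybe (the flag name and bit are just
for illustration, not a defined ABI):

/* In the kvm_run UAPI (illustrative name/bit only): */
#define KVM_RUN_BUS_LOCK	(1 << 0)

/* In the exit path, piggy-back on whatever exit reason is reported: */
static void report_bus_lock(struct kvm_vcpu *vcpu)
{
	if (to_vmx(vcpu)->exit_reason.bus_lock_detected)
		vcpu->run->flags |= KVM_RUN_BUS_LOCK;
}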

This is likely the easiest solution.