Re: [PATCH v3 1/4] KVM: Recover IRTE to remapped mode if the interrupt is not single-destination
From: rkrcmar@xxxxxxxxxx
Date: Tue Jan 26 2016 - 13:23:12 EST
2016-01-26 09:44+0800, Yang Zhang:
> On 2016/1/25 21:59, rkrcmar@xxxxxxxxxx wrote:
>>2016-01-25 09:49+0800, Yang Zhang:
>>>On 2016/1/22 21:31, rkrcmar@xxxxxxxxxx wrote:
>>>>2016-01-22 10:03+0800, Yang Zhang:
>>>>>Not so complicated. We can reuse the wake up vector and check whether the
>>>>>interrupt is multicast when one of destination vcpu handles it.
>>>>
>>>>I'm not sure what you mean now ... I guess it is:
>>>>- Deliver the interrupt to a guest VCPU and relay the multicast to other
>>>> VCPUs. No, it's strictly worse than intercepting it in the host.
>>>
>>>It is still handled in host context, not guest context. The wakeup event
>>>cannot be consumed like a posted event.
>>
>>Ok. ("when one of destination vcpu handles it" confused me into
>>thinking that you'd like to handle it with the notification vector.)
>
> Sorry for my poor english. :(
It's good. Ambiguity is hard to avoid when the reader doesn't just assume
the most likely meaning.
>>>> Also, if wakeup vector were used for wakeup and multicast, we'd be
>>>> uselessly doing work, because we can't tell which reason triggered the
>>>> interrupt before finishing one part -- using separate vectors for that
>>>> would be a bit nicer.
>>
>>(imprecise -- we would always have to check for ON bit of all PIDs from
>> blocked VCPUs, for the original meaning of wakeup vector, and always
>
> This is what KVM does currently.
Yep.
>> either read the PIRR or check for ON bit of all PIDs that encode
>> multicast interrupts; then we have to clear ON bits for multicasts.)
>
> Also, most of the work is covered by the current logic, except checking for
> the multicast.
We could reuse the setup that gets us to wakeup_handler, but there is
nothing to share in the handler itself. Sharing a handler means that we
always have to execute both parts.
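(For reference, the handler we would be sharing is roughly the sketch
below, simplified from wakeup_handler in arch/x86/kvm/vmx.c -- it only
walks the per-CPU list of blocked VCPUs and kicks those whose PID has ON
set, so there is no multicast-shaped piece in it to reuse.)

static void wakeup_handler(void)
{
	struct kvm_vcpu *vcpu;
	int cpu = smp_processor_id();

	spin_lock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
	list_for_each_entry(vcpu, &per_cpu(blocked_vcpu_on_cpu, cpu),
			blocked_vcpu_list) {
		struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);

		/* ON set in the PID means a posted interrupt is pending,
		 * so kick the VCPU; nothing here knows about multicast. */
		if (pi_test_on(pi_desc) == 1)
			kvm_vcpu_kick(vcpu);
	}
	spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
}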
We must create a new PID anyway, and compared to the extra work needed for
multicast handling, a new vector + handler is a relatively small code
investment that adds clarity to the design (and performance).
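(The separate handler could look roughly like the sketch below, just to
illustrate the split; struct multicast_pid, the per-CPU multicast_pid_list
and its lock, and the irq/kvm fields are made-up names for whatever
bookkeeping the multicast PIDs would need -- only pi_test_and_clear_on()
and kvm_irq_delivery_to_apic() are existing helpers.)

/* Hypothetical handler for a dedicated multicast notification vector. */
static void multicast_wakeup_handler(void)
{
	struct multicast_pid *mc;
	int cpu = smp_processor_id();

	spin_lock(&per_cpu(multicast_pid_lock, cpu));
	list_for_each_entry(mc, &per_cpu(multicast_pid_list, cpu), list) {
		/* Consume the posted interrupt: clear ON (a real version
		 * would also clear the vector's bit in the PIR) and fan the
		 * interrupt out to all destination VCPUs in software. */
		if (pi_test_and_clear_on(&mc->pi_desc))
			kvm_irq_delivery_to_apic(mc->kvm, NULL, &mc->irq, NULL);
	}
	spin_unlock(&per_cpu(multicast_pid_lock, cpu));
}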
(Taking the vector splitting to the extreme, we'd improve performance if
we added a vector per assigned device. That is practically the same as
non-posted mode, just more complicated.)
>>---
>>There might be a benefit of using posted interrupts for host interrupts
>>when we run out of free interrupt vectors: we could start using vectors
>>by multiple sources through posted interrupts, if using posted
>
> Do you mean per vcpu posted interrupts?
I mean using posting for host device interrupts (no virt involved).
Let's say we have 300 devices on one CPU and the CPU has 200 usable
vectors. 100 device interrupts then have to share vectors with others,
and using posting might be faster than directly checking multiple
devices.
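(Entirely made up, only to illustrate the idea: each device sharing the
vector would get its own posted descriptor, and the shared vector's
handler would use the ON bits to run only the handlers of devices that
actually fired, instead of polling every device on the line the way
IRQF_SHARED does today.)

static void shared_posted_vector_handler(unsigned int vector)
{
	struct posted_host_irq *p;	/* hypothetical per-device state */

	list_for_each_entry(p, &shared_vector_list[vector], list) {
		/* ON tells us which device posted an interrupt,
		 * so the other devices are never touched. */
		if (test_and_clear_bit(POSTED_INTR_ON,
				       (unsigned long *)&p->pi_desc.control))
			p->handler(p->irq, p->dev_id);
	}
}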
(I couldn't come up with a plausible scenario where we'd actually want to
use posting for host interrupts, though.)