On Mon, May 13, 2019 at 07:07:36PM -0700, Andy Lutomirski wrote:
> On Mon, May 13, 2019 at 2:09 PM Liran Alon <liran.alon@xxxxxxxxxx> wrote:
> > The hope is that the vast majority of #VMExit handlers will be able
> > to run entirely without switching to the full address space,
> > thereby avoiding the performance hit of (2).
> >
> > However, for the very few #VMExits that do require the full kernel
> > address space, we must first kick the sibling hyperthread out of
> > the guest and only then switch to the full kernel address space;
> > and only once all hyperthreads have returned to the KVM address
> > space do we allow them to enter the guest again.
>
> I've been experimenting with IPI'ing siblings on vmexit, primarily
> because we know we'll need it if ASI turns out to be viable, but
> also because I wanted to understand why previous experiments
> resulted in such poor performance.
>
> What exactly does "kick" mean in this context? It sounds like you're
> going to need to be able to kick sibling VMs from extremely atomic
> contexts like NMI and MCE.

Yeah, doing the full synchronous thing from NMI/MCE context sounds
exceedingly dodgy; however:
Realistically they only need to send an IPI to the other sibling; they
don't need to wait for the VMExit to complete or anything else.
And that is something we can do from NMI context -- with a bit of care.
See also arch_irq_work_raise(); specifically, we need to ensure we
leave the APIC in an idle state, such that if we interrupted an APIC
write sequence, that sequence does not suddenly fail or find the APIC
in an unexpected state.