Re: [PATCH v3] x86/hyper-v: micro-optimize send_ipi_one case

From: Vitaly Kuznetsov
Date: Thu Nov 07 2019 - 08:26:59 EST


Vitaly Kuznetsov <vkuznets@xxxxxxxxxx> writes:

> When sending an IPI to a single CPU there is no need to deal with cpumasks.
> With a 2 CPU guest on WS2019 I'm seeing a minor (about 3%, 8043 -> 7761 CPU
> cycles) improvement with the smp_call_function_single() loop benchmark. The
> optimization, however, is tiny and straightforward. Also, send_ipi_one() is
> important for the PV spinlock kick.
>
> I was also wondering if it would make sense to switch to using regular
> APIC IPI send for the CPU > 64 case but no, it is about twice as expensive
> (12650 CPU cycles for the __send_ipi_mask_ex() call vs. 26000 for
> orig_apic.send_IPI(cpu, vector)).
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> ---
> Changes since v2:
> - Check VP number instead of CPU number against >= 64 [Michael]
> - Check for VP_INVAL
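
For anyone skimming the thread, the single-CPU fast path the patch adds boils
down to roughly the following. This is only a minimal sketch of the idea: the
helper names (hv_do_fast_hypercall16(), __send_ipi_mask_ex(), VP_INVAL, the
HV_IPI_LOW/HIGH_VECTOR bounds) follow the existing arch/x86/hyperv/hv_apic.c
conventions and are not quoted verbatim from the patch itself.

static bool __send_ipi_one(int cpu, int vector)
{
	int vp = hv_cpu_number_to_vp_number(cpu);

	/* Bail out if the hypercall page isn't set up or the VP is unknown. */
	if (!hv_hypercall_pg || vp == VP_INVAL)
		return false;

	/* Only vectors in the architectural IPI range can be sent this way. */
	if (vector < HV_IPI_LOW_VECTOR || vector > HV_IPI_HIGH_VECTOR)
		return false;

	/* VPs >= 64 don't fit into the simple hypercall's 64-bit mask. */
	if (vp >= 64)
		return __send_ipi_mask_ex(cpumask_of(cpu), vector);

	/* Fast hypercall with the target VP as a single bit, no cpumask. */
	return !hv_do_fast_hypercall16(HVCALL_SEND_IPI, vector, BIT_ULL(vp));
}

The point of the exercise is the last line: for the common vp < 64 case no
cpumask is built or walked, which is where the measured ~3% comes from.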

Hi Sasha,

Do you have plans to pick this up for hyperv-next, or should we ask the x86
folks to?

Thanks!

--
Vitaly