Re: [PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
From: Juergen Gross
Date: Tue Nov 14 2017 - 02:30:54 EST
On 14/11/17 08:02, Quan Xu wrote:
>
>
> On 2017/11/13 18:53, Juergen Gross wrote:
>> On 13/11/17 11:06, Quan Xu wrote:
>>> From: Quan Xu <quan.xu0@xxxxxxxxx>
>>>
>>> So far, pv_idle_ops.poll is the only op for pv_idle. .poll is called
>>> in the idle path and polls for a while before we enter the real idle
>>> state.
>>>
>>> In virtualization, the idle path includes several heavy operations,
>>> such as timer access (LAPIC timer or TSC deadline timer), which hurt
>>> performance, especially for latency-sensitive workloads like message
>>> passing tasks. The cost mainly comes from the vmexit, which is a
>>> hardware context switch between the virtual machine and the hypervisor.
>>> Our solution is to poll for a while and not enter the real idle path
>>> if we get a schedule event during polling.
>>>
>>> Polling may waste CPU, so we adopt a smart polling mechanism to
>>> reduce useless polling.
>>>
>>> Signed-off-by: Yang Zhang <yang.zhang.wz@xxxxxxxxx>
>>> Signed-off-by: Quan Xu <quan.xu0@xxxxxxxxx>
>>> Cc: Juergen Gross <jgross@xxxxxxxx>
>>> Cc: Alok Kataria <akataria@xxxxxxxxxx>
>>> Cc: Rusty Russell <rusty@xxxxxxxxxxxxxxx>
>>> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>>> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
>>> Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
>>> Cc: x86@xxxxxxxxxx
>>> Cc: virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
>>> Cc: linux-kernel@xxxxxxxxxxxxxxx
>>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
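For reference, a minimal sketch of what such a pv_idle_ops hook and its
idle-path call site might look like, going by the description above (the
names and placement here are assumptions for illustration, not taken from
the actual patch):

struct pv_idle_ops {
        /* poll for a wakeup event for a while before real idle */
        void (*poll)(void);
};

extern struct pv_idle_ops pv_idle_ops;

/* called from the idle path, right before the real idle routine */
static inline void paravirt_idle_poll(void)
{
        if (pv_idle_ops.poll)
                pv_idle_ops.poll();
}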
>> Hmm, is the idle entry path really so critical to performance that a new
>> pvops function is necessary?
> Juergen, here is the data we got when running the netperf benchmark:
> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
>    29031.6 bit/s -- 76.1 %CPU
>
> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
>    35787.7 bit/s -- 129.4 %CPU
>
> 3. w/ kvm dynamic poll:
>    35735.6 bit/s -- 200.0 %CPU
>
> 4. w/ patch and w/ kvm dynamic poll:
>    42225.3 bit/s -- 198.7 %CPU
>
> 5. idle=poll:
>    37081.7 bit/s -- 998.1 %CPU
>
>
>
> w/ this patch, we improve performance by 23%. We could even improve
> performance by 45.4% if we use the patch together with kvm dynamic poll.
> Also the CPU cost is much lower than in the 'idle=poll' case.
I don't question the general idea. I just think pvops isn't the best way
to implement it.
>> Wouldn't a function pointer, maybe guarded
>> by a static key, be enough? A further advantage would be that this would
>> work on other architectures, too.
>
> I assume this feature will be ported to other archs. A new pvops makes
> the code clean and easy to maintain. I also tried to add it to an
> existing pvops, but it doesn't fit.
You are aware that pvops is x86 only?
I really don't see the big difference in maintainability compared to the
static key / function pointer variant:
/* poll callback, installed by the hypervisor-specific guest code */
void (*guest_idle_poll_func)(void);
struct static_key guest_idle_poll_key __read_mostly;

/* called from the idle path; effectively a NOP unless the key is enabled */
static inline void guest_idle_poll(void)
{
        if (static_key_false(&guest_idle_poll_key))
                guest_idle_poll_func();
}
And KVM would just need to set guest_idle_poll_func and enable the
static key. Works on non-x86 architectures, too.
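For instance, the KVM guest side could then do something like this (just a
rough sketch under the scheme above; kvm_guest_idle_poll() and
kvm_setup_idle_poll() are made-up names for illustration):

/* hypothetical KVM guest polling routine */
static void kvm_guest_idle_poll(void)
{
        /* spin for a bounded time, checking for a reschedule event */
}

static void __init kvm_setup_idle_poll(void)
{
        guest_idle_poll_func = kvm_guest_idle_poll;
        static_key_slow_inc(&guest_idle_poll_key);
}

The generic idle loop would call guest_idle_poll() right before entering
the real idle state, so other architectures only need that one call site.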
Juergen