From: Quan Xu <quan.xu0@xxxxxxxxx>
So far, pv_idle_ops.poll is the only op in pv_idle_ops. .poll is called
in the idle path and polls for a while before we enter the real idle
state.

In virtualization, the idle path includes several heavy operations,
including timer access (the LAPIC timer or the TSC deadline timer),
which hurt performance, especially for latency-sensitive workloads such
as message-passing tasks. The cost comes mainly from the VM exit, which
is a hardware context switch between the virtual machine and the
hypervisor. Our solution is to poll for a while and not enter the real
idle path if we get a schedule event while polling.

Polling may waste CPU, so we adopt a smart polling mechanism to reduce
useless polling.
Signed-off-by: Yang Zhang <yang.zhang.wz@xxxxxxxxx>
Signed-off-by: Quan Xu <quan.xu0@xxxxxxxxx>
Cc: Juergen Gross <jgross@xxxxxxxx>
Cc: Alok Kataria <akataria@xxxxxxxxxx>
Cc: Rusty Russell <rusty@xxxxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Cc: x86@xxxxxxxxxx
Cc: virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
Cc: linux-kernel@xxxxxxxxxxxxxxx
Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
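
For context, a minimal sketch of the interface the commit message
describes might look like the following; only pv_idle_ops.poll is taken
from the text above, the helper name and call-site details are
assumptions, not the actual patch:

/*
 * Minimal sketch of the proposed pv_idle_ops, based only on the commit
 * message above.  The paravirt_idle_poll() helper and its call site are
 * assumed for illustration; they are not the actual patch.
 */
struct pv_idle_ops {
	/* Poll for a schedule event before entering the real idle state. */
	void (*poll)(void);
};

extern struct pv_idle_ops pv_idle_ops;

/* Would be called early in the idle entry path. */
static inline void paravirt_idle_poll(void)
{
	if (pv_idle_ops.poll)
		pv_idle_ops.poll();
}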
Juergen Gross wrote:

Hmm, is the idle entry path really so critical to performance that a
new pvops function is necessary?
On 13/11/17 11:06, Quan Xu wrote:

Juergen, here is the data we get when running the netperf benchmark:

  1. w/o patch, kvm dynamic poll disabled (halt_poll_ns=0):
     29031.6 bit/s -- 76.1 %CPU
  2. w/ patch, kvm dynamic poll disabled (halt_poll_ns=0):
     35787.7 bit/s -- 129.4 %CPU
  3. w/ kvm dynamic poll:
     35735.6 bit/s -- 200.0 %CPU
  4. w/ patch and w/ kvm dynamic poll:
     42225.3 bit/s -- 198.7 %CPU
  5. idle=poll:
     37081.7 bit/s -- 998.1 %CPU

With this patch we improve performance by 23%, and we could even
improve it by 45.4% if we combine the patch with kvm dynamic poll. The
CPU cost is also much lower than in the 'idle=poll' case.
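
The "smart polling mechanism" mentioned in the commit message is not
shown in this thread. A rough sketch of one possible grow/shrink
heuristic, with all names, thresholds and the policy itself assumed
rather than taken from the series, could look like:

/*
 * Rough sketch of an adaptive ("smart") guest-side poll window:
 * grow the window when polling caught a schedule event, shrink it
 * when the poll was wasted.  Names and thresholds are invented for
 * illustration; this is not the code from the series.
 */
#include <linux/ktime.h>
#include <linux/sched.h>
#include <asm/processor.h>

#define GUEST_POLL_NS_MAX	500000

static unsigned int guest_poll_ns = 50000;	/* current window */

static void guest_smart_idle_poll(void)
{
	ktime_t start = ktime_get();

	while (!need_resched() &&
	       ktime_to_ns(ktime_sub(ktime_get(), start)) < guest_poll_ns)
		cpu_relax();

	if (need_resched()) {			/* poll paid off: grow */
		guest_poll_ns *= 2;
		if (guest_poll_ns > GUEST_POLL_NS_MAX)
			guest_poll_ns = GUEST_POLL_NS_MAX;
	} else {				/* wasted poll: shrink */
		guest_poll_ns /= 2;
	}
}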
On 2017/11/13 18:53, Juergen Gross wrote:

I don't question the general idea. I just think pvops isn't the best
way to implement it. Wouldn't a function pointer, maybe guarded by a
static key, be enough? A further advantage would be that this would
work on other architectures, too.

Quan Xu wrote:

I assume this feature will be ported to other archs.. a new pvops makes
the code clean and easy to maintain. I also tried to add it to the
existing pvops, but it doesn't fit.

Juergen Gross wrote:

You are aware that pvops is x86 only?

On 14/11/17 08:02, Quan Xu wrote:

Yes, I'm aware.. (sorry, a typo: s/other archs/other hypervisors/ -- it
refers to hypervisors like Xen, HyperV and VMware)..
Juergen Gross wrote:

I really don't see the big difference in maintainability compared to
the static key / function pointer variant:
void (*guest_idle_poll_func)(void);
struct static_key guest_idle_poll_key __read_mostly;

static inline void guest_idle_poll(void)
{
	if (static_key_false(&guest_idle_poll_key))
		guest_idle_poll_func();
}

And KVM would just need to set guest_idle_poll_func and enable the
static key. Works on non-x86 architectures, too.
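
For comparison, here is a sketch of the guest-side wiring just
described; the kvm_guest_* names and the init hook are invented for
illustration, not an existing API:

/*
 * Sketch of how a guest (e.g. KVM guest support code) could hook into
 * the static-key variant above.  kvm_guest_idle_poll() and
 * kvm_guest_setup_idle_poll() are invented names for illustration.
 */
#include <linux/init.h>
#include <linux/jump_label.h>

extern void (*guest_idle_poll_func)(void);
extern struct static_key guest_idle_poll_key;

static void kvm_guest_idle_poll(void)
{
	/* poll for a schedule event for a while, as in the proposed patch */
}

static void __init kvm_guest_setup_idle_poll(void)
{
	guest_idle_poll_func = kvm_guest_idle_poll;
	static_key_slow_inc(&guest_idle_poll_key);	/* enable the hook */
}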
Quan Xu wrote:

Thank you for your sample code :)
I agree there is no big difference.. I think we are discussing two
things:
 1) x86 VMs on different hypervisors
 2) VMs of different archs on the kvm hypervisor

What I want to do is support x86 VMs on different hypervisors, such as
kvm / xen / hyperv.
On 2017/11/14 15:30, Juergen Gross wrote:

Why limit the solution to x86 if the more general solution isn't
harder?

As you didn't give any reason why the pvops approach is better, other
than that you don't care about non-x86 platforms, you won't get an
"Ack" from me for this patch.
On 14/11/17 10:38, Quan Xu wrote:

Referring to 'pv_mmu_ops': HyperV and Xen can implement their own
functions for 'pv_mmu_ops'.
I think it is the same for pv_idle_ops.
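
To illustrate the analogy (the xen_*/hv_* names below are assumptions,
not code from this series), each hypervisor's guest support code would
simply install its own .poll callback, just as it installs its own
pv_mmu_ops entries:

/*
 * Sketch of the pv_mmu_ops-style analogy for pv_idle_ops: each
 * hypervisor guest fills in its own .poll.  The struct repeats the
 * earlier sketch; the callbacks below are invented names.
 */
#include <linux/init.h>

struct pv_idle_ops {
	void (*poll)(void);
};

struct pv_idle_ops pv_idle_ops;

static void xen_guest_idle_poll(void)
{
	/* Xen-specific polling */
}

static void hv_guest_idle_poll(void)
{
	/* Hyper-V-specific polling */
}

void __init xen_guest_init_idle(void)
{
	pv_idle_ops.poll = xen_guest_idle_poll;
}

void __init hv_guest_init_idle(void)
{
	pv_idle_ops.poll = hv_guest_idle_poll;
}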
With the above explanation, do you still think I need to define the
static key / function pointer variant?
btw, any interest in porting it to Xen HVM guests? :)
Juergen Gross wrote:

Maybe. But this should work for Xen on ARM, too.

Juergen