Re: [RFC PATCH v2 0/7] x86/idle: add halt poll support

From: Quan Xu
Date: Thu Sep 14 2017 - 04:36:39 EST

On 2017/9/13 19:56, Yang Zhang wrote:
On 2017/8/29 22:56, Michael S. Tsirkin wrote:
On Tue, Aug 29, 2017 at 11:46:34AM +0000, Yang Zhang wrote:
Some latency-intensive workloads see an obvious performance
drop when running inside a VM.

But are we trading a lot of CPU for a bit of lower latency?

The main reason is that the overhead
is amplified when running inside a VM. The biggest cost I have seen is
in the idle path.

This patch introduces a new mechanism to poll for a while before
entering the idle state. If a reschedule is needed during the poll,
we don't have to go through the heavy-overhead path.
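
The idea, roughly, in kernel-style C (an illustrative sketch only;
'poll_then_halt' and the 20000 ns default are placeholders, not the
actual patch code):

#include <linux/sched.h>        /* need_resched()     */
#include <linux/timekeeping.h>  /* ktime_get_ns()     */
#include <asm/irqflags.h>       /* safe_halt() on x86 */

/* 'halt_poll_threshold' is the tunable exercised in the results below (ns). */
static unsigned long halt_poll_threshold = 20000;

static void poll_then_halt(void)
{
	u64 start = ktime_get_ns();

	/* Spin for up to halt_poll_threshold ns before halting. */
	while (ktime_get_ns() - start < halt_poll_threshold) {
		if (need_resched())
			return;   /* work arrived: skip the halt/wakeup path */
		cpu_relax();      /* be friendly to the sibling hyperthread */
	}
	safe_halt();              /* nothing came in: halt as usual */
}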

Isn't it the job of an idle driver to find the best way to
halt the CPU?

It looks like, just by adding a cstate, we can make it
halt only at higher latencies. And at lower latencies,
if it's doing a good job, we can hopefully use mwait to
stop the CPU.
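
Roughly, that amounts to a state table like the following sketch
(hypothetical: the names, latency numbers, and the kvm_halt_enter
helper are made up, and the real intel_idle tables differ; intel_idle()
is the mwait-based enter callback, which is static to intel_idle.c):

#include <linux/cpuidle.h>

/* Hypothetical HLT-based enter callback; HLT exits to the host. */
static int kvm_halt_enter(struct cpuidle_device *dev,
			  struct cpuidle_driver *drv, int index)
{
	safe_halt();
	return index;
}

static struct cpuidle_state kvm_cstates[] = {
	{
		.name             = "C1-KVM-MWAIT",
		.desc             = "mwait inside the guest, no VM exit",
		.exit_latency     = 2,          /* us; illustrative */
		.target_residency = 2,          /* us; illustrative */
		.enter            = intel_idle, /* mwait-based entry */
	},
	{
		.name             = "C1-KVM-HLT",
		.desc             = "hlt, exits to the host",
		.exit_latency     = 30,   /* us; a guessed VM-exit cost */
		.target_residency = 200,  /* only for long idle periods */
		.enter            = kvm_halt_enter,
	},
};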

In fact, I have been experimenting with exactly that.
Some initial results are encouraging, but I could use help
with testing and especially tuning. If you can help,
please let me know!

Quan, can you help test it and share the results? Thanks.


Hi MST,

I have tested the patch "intel_idle: add pv cstates when running on kvm" on a recent host that allows guests
to execute mwait without an exit. I have also tested our patch "[RFC PATCH v2 0/7] x86/idle: add halt poll support",
upstream Linux, and idle=poll.

The following are the results (which look better than before, as I ran the test cases on a more powerful machine):

For __netperf__, the first column is the transaction rate per second and the second column is CPU utilization.

1. upstream Linux

      28371.7 trans/s -- 76.6 %CPU

2. idle=poll

      34372 trans/s -- 999.3 %CPU

3. "[RFC PATCH v2 0/7] x86/idle: add halt poll support", with different values of the 'halt_poll_threshold' parameter:

      28362.7 trans/s -- 74.7  %CPU (halt_poll_threshold=10000)
      32949.5 trans/s -- 82.5  %CPU (halt_poll_threshold=20000)
      39717.9 trans/s -- 104.1 %CPU (halt_poll_threshold=30000)
      40137.9 trans/s -- 104.4 %CPU (halt_poll_threshold=40000)
      40079.8 trans/s -- 105.6 %CPU (halt_poll_threshold=50000)

4. "intel_idle: add pv cstates when running on kvm"

      33041.8 trans/s -- 999.4 %CPU

For __ctxsw__, the first column is the time per process context switch and the second column is CPU utilization.

1. upstream Linux

      3624.19 ns/ctxsw -- 191.9 %CPU

2. idle=poll

      3419.66 ns/ctxsw -- 999.2 %CPU

3. "[RFC PATCH v2 0/7] x86/idle: add halt poll support", with different values of the 'halt_poll_threshold' parameter:

      1123.40 ns/ctxsw -- 199.6 %CPU (halt_poll_threshold=10000)
      1127.38 ns/ctxsw -- 199.7 %CPU (halt_poll_threshold=20000)
      1113.58 ns/ctxsw -- 199.6 %CPU (halt_poll_threshold=30000)
      1117.12 ns/ctxsw -- 199.6 %CPU (halt_poll_threshold=40000)
      1121.62 ns/ctxsw -- 199.6 %CPU (halt_poll_threshold=50000)

4. "intel_idle: add pv cstates when running on kvm"

      3427.59 ns/ctxsw -- 999.4 %CPU
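
For anyone who wants to reproduce ns/ctxsw figures like the above, a pipe
ping-pong microbenchmark along these lines works (a sketch, not necessarily
the exact tool I used; pin both processes to one CPU, e.g. 'taskset -c 0',
so each round trip actually forces two context switches):

#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>

#define ITERS 100000

static uint64_t now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
	int ping[2], pong[2];
	char c = 'x';

	/* error handling omitted for brevity */
	pipe(ping);
	pipe(pong);

	if (fork() == 0) {
		/* child: echo every byte back */
		for (int i = 0; i < ITERS; i++) {
			read(ping[0], &c, 1);
			write(pong[1], &c, 1);
		}
		_exit(0);
	}

	uint64_t start = now_ns();
	for (int i = 0; i < ITERS; i++) {
		write(ping[1], &c, 1);  /* wake the child ...        */
		read(pong[0], &c, 1);   /* ... and wait for the echo */
	}
	/* each round trip is two switches when both tasks share one CPU */
	printf("%.2f ns/ctxsw\n", (double)(now_ns() - start) / (2.0 * ITERS));
	return 0;
}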

-Quan