On Tue, Sep 1, 2015 at 5:29 PM, Wanpeng Li <wanpeng.li@xxxxxxxxxxx> wrote:
> On 9/2/15 7:24 AM, David Matlack wrote:
>> On Tue, Sep 1, 2015 at 3:58 PM, Wanpeng Li <wanpeng.li@xxxxxxxxxxx> wrote:

<snip>

>>>> That's fine. It's just easier to convey my ideas with a patch. FYI the
>>>
>>> Why can this happen?
>>
>> Ah, probably because I'm missing 9c8fd1ba220 (KVM: x86: optimize delivery
>> of TSC deadline timer interrupt). I don't think the edge case exists in
>> the latest kernel.
>
> Yeah, I hope we both (including Peter Kieser) can test against the latest
> kvm tree to avoid confusion. The reason to introduce the adaptive
> halt-polling toggle is to handle the "edge case" you mentioned above. So I
> think we can put more effort into improving v4 instead. I will improve v4
> to handle short halts today. ;-)

The other reason for the toggle patch was to add the timer for
kvm_vcpu_block, which I think is the only way to get dynamic halt-polling
right. Feel free to work on top of v4!
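
(For anyone following along, a rough self-contained userspace sketch of the
grow/shrink style of dynamic halt-polling being discussed is below. It is
not the v4 patch and not kvm code; the start value, cap, and grow/shrink
factors are assumptions picked only to show the mechanism.)

/*
 * Illustrative sketch only -- not the v4 patch or kvm code. It simulates
 * a per-vcpu poll window that grows when polling succeeds (a wakeup
 * arrived inside the window) and shrinks when polling fails (the vcpu
 * had to block anyway). All constants are assumptions for illustration.
 */
#include <stdio.h>
#include <stdbool.h>

#define POLL_NS_START   10000    /* assumed initial window, 10 us */
#define POLL_NS_MAX     500000   /* assumed cap, 500 us */
#define POLL_NS_GROW    2        /* assumed grow factor */
#define POLL_NS_SHRINK  2        /* assumed shrink divisor */

struct vcpu {
        unsigned int halt_poll_ns;  /* current per-vcpu poll window */
};

/* A wakeup arrived while polling: polling paid off, so widen the window. */
static void grow_poll_window(struct vcpu *v)
{
        if (v->halt_poll_ns == 0)
                v->halt_poll_ns = POLL_NS_START;
        else if (v->halt_poll_ns < POLL_NS_MAX / POLL_NS_GROW)
                v->halt_poll_ns *= POLL_NS_GROW;
        else
                v->halt_poll_ns = POLL_NS_MAX;
}

/* Nothing arrived during the poll: the busy-wait was wasted, so narrow it. */
static void shrink_poll_window(struct vcpu *v)
{
        v->halt_poll_ns /= POLL_NS_SHRINK;
}

int main(void)
{
        struct vcpu v = { .halt_poll_ns = 0 };
        /* Simulated halts: true = a wakeup arrived inside the poll window. */
        bool poll_hit[] = { true, true, true, true, false, false, true };
        unsigned int i;

        for (i = 0; i < sizeof(poll_hit) / sizeof(poll_hit[0]); i++) {
                if (poll_hit[i])
                        grow_poll_window(&v);
                else
                        shrink_poll_window(&v);
                printf("halt %u: halt_poll_ns = %u\n", i, v.halt_poll_ns);
        }
        return 0;
}

(Deciding grow vs. shrink well also needs to know how long the block
actually lasted, i.e. whether the halt was "short", which is what the timer
around kvm_vcpu_block is for; the sketch glosses over that by taking
hit/miss as given.)
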
<snip>

>>>> I'm not seeing the same results with v4. With a 250HZ ticking guest
>>>
>>> Did you test your patch against a Windows guest?
>>
>> I have not. I tested against a 250HZ Linux guest to check how it performs
>> against a ticking guest. Presumably, Windows should be the same, but at a
>> higher tick rate. Do you have a test for Windows?
>
> I just tested the idle vCPU usage.
>
> V4 for a Windows 10 guest:
>
> +-----------------+----------------+------------------------+
> |  w/o halt-poll  |  w/ halt-poll  | dynamic (v4) halt-poll |
> +-----------------+----------------+------------------------+
> |      ~2.1%      |      ~3.0%     |         ~2.4%          |
> +-----------------+----------------+------------------------+

I see 15% c0 with halt_poll_ns=2000000 and 1.27% with halt_poll_ns=0.
Are you running one vcpu per pcpu?
(The reason for the overhead: the new tracepoint shows each vcpu is
alternating between 0 and 500 us.)
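
(Rough arithmetic, not a measurement, for why the idle overhead tracks the
effective poll window: the 4 ms tick period below, i.e. a 250HZ guest like
my linux test, and the window sizes are assumptions chosen only to show the
shape of the cost.)

#include <stdio.h>

int main(void)
{
        /* Assumed numbers: an idle guest that still ticks every 4 ms
         * (250HZ), and poll windows short enough that every poll fails. */
        const double tick_period_us = 4000.0;
        const double window_us[] = { 0.0, 250.0, 500.0 };
        unsigned int i;

        for (i = 0; i < sizeof(window_us) / sizeof(window_us[0]); i++) {
                /* A failed poll burns the whole window busy-waiting, so the
                 * host pays window/period of a pcpu per idle vcpu. */
                double overhead_pct = window_us[i] / tick_period_us * 100.0;
                printf("poll window %5.0f us -> ~%4.1f%% of a pcpu per idle vcpu\n",
                       window_us[i], overhead_pct);
        }
        return 0;
}

(A guest with a faster tick pays proportionally more for the same window,
which fits the Windows numbers above running higher than the Linux ones.)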

> V4 for a Linux guest:
>
> +-----------------+----------------+-------------------+
> |  w/o halt-poll  |  w/ halt-poll  | dynamic halt-poll |
> +-----------------+----------------+-------------------+
> |      ~0.9%      |      ~1.8%     |       ~1.2%       |
> +-----------------+----------------+-------------------+
>
> Regards,
> Wanpeng Li