Re: [PATCH] sched: introduce configurable delay before entering idle

From: Marcelo Tosatti
Date: Wed May 15 2019 - 16:28:40 EST


On Wed, May 15, 2019 at 09:42:48AM +0800, Wanpeng Li wrote:
> On Wed, 15 May 2019 at 02:20, Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:
> >
> > On Tue, May 14, 2019 at 11:20:15AM -0400, Konrad Rzeszutek Wilk wrote:
> > > On Tue, May 14, 2019 at 10:50:23AM -0300, Marcelo Tosatti wrote:
> > > > On Mon, May 13, 2019 at 05:20:37PM +0800, Wanpeng Li wrote:
> > > > > On Wed, 8 May 2019 at 02:57, Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:
> > > > > >
> > > > > >
> > > > > > Certain workloads perform poorly on KVM compared to baremetal
> > > > > > due to baremetal's ability to mwait on the NEED_RESCHED bit
> > > > > > of the task flags (thereby skipping the IPI).
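
For reference, on bare metal the idle CPU advertises that it is polling
(TIF_POLLING_NRFLAG) and mwaits on its own thread-flags word, so a waker
only has to set TIF_NEED_RESCHED and can skip the reschedule IPI. A
minimal userspace analogue of that handshake, with a C11 atomic standing
in for the thread flags and a busy poll standing in for mwait (names
here are illustrative, not kernel API):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define POLLING      (1u << 0)  /* stands in for TIF_POLLING_NRFLAG */
#define NEED_RESCHED (1u << 1)  /* stands in for TIF_NEED_RESCHED   */

static _Atomic unsigned int flags;

/* Idle side: advertise polling, then watch the flags word.  On bare
 * metal this busy loop is mwait monitoring the task-flags cacheline. */
static void *idle_cpu(void *arg)
{
	atomic_fetch_or(&flags, POLLING);
	while (!(atomic_load(&flags) & NEED_RESCHED))
		;				/* mwait would sleep here */
	atomic_fetch_and(&flags, ~POLLING);
	printf("idle side woke up, no IPI needed\n");
	return NULL;
}

/* Waker side: if the target advertised polling, setting the bit is
 * enough; otherwise an IPI would be needed to get its attention. */
static void wake_cpu(void)
{
	unsigned int old = atomic_fetch_or(&flags, NEED_RESCHED);

	if (!(old & POLLING))
		printf("target not polling, would send IPI\n");
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, idle_cpu, NULL);
	while (!(atomic_load(&flags) & POLLING))
		;				/* wait for the idle side */
	wake_cpu();
	pthread_join(t, NULL);
	return 0;
}
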
> > > > >
> > > > > KVM supports exposing mwait to the guest; can that solve this?
> > > > >
> > > > > Regards,
> > > > > Wanpeng Li
> > > >
> > > > Unfortunately mwait in the guest is not feasible (incompatible
> > > > with multiple guests). Checking whether a paravirt solution is
> > > > possible.
> > >
> > > There is the obvious problem that the guest can be malicious and
> > > provide bogus data via the paravirt solution. That is, it could
> > > expose 0% CPU usage but in reality be mining and using 100%.
> >
> > The idea is to have a hypercall for the guest to set the
> > need_resched=1 bit. It can only hurt itself.
>
> This reminds me of the patchset from Aliyun:
> https://lkml.org/lkml/2017/6/22/296

Thanks for the pointer.

"The background is that we(Alibaba Cloud) do get more and more
complaints from our customers in both KVM and Xen compare to bare-mental.
After investigations, the root cause is known to us: big cost in message
passing workload(David show it in KVM forum 2015)

A typical message workload looks like below:

vcpu 0                            vcpu 1
1. send ipi                       2. doing hlt
3. go into idle                   4. receive ipi and wake up from hlt
5. write APIC timer twice         6. write APIC timer twice to
   to stop sched timer               reprogram sched timer
7. doing hlt                      8. handle task and send ipi to
                                     vcpu 0
9. same to 4.                     10. same to 3."

This is very similar to the client/server example pair
included in the first message.
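
A minimal ping-pong pair in the same spirit (not the exact client/server
programs from the first message) can be built from two threads pinned to
different CPUs bouncing a byte over a pair of pipes: every half round
blocks in read() (hlt on the idle vcpu) and every write() wakes the peer
(an IPI to the other vcpu):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

#define ROUNDS 100000

static int to_server[2], to_client[2];

static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

/* Server: block in read() (the vcpu goes idle and hlts), then echo the
 * byte back, which wakes the client (an IPI to the other vcpu). */
static void *server(void *arg)
{
	char c;

	pin_to_cpu(1);
	for (int i = 0; i < ROUNDS; i++) {
		if (read(to_server[0], &c, 1) != 1 ||
		    write(to_client[1], &c, 1) != 1)
			break;
	}
	return NULL;
}

int main(void)
{
	pthread_t t;
	char c = 'x';

	if (pipe(to_server) || pipe(to_client))
		return 1;
	pthread_create(&t, NULL, server, NULL);

	pin_to_cpu(0);
	for (int i = 0; i < ROUNDS; i++) {
		if (write(to_server[1], &c, 1) != 1 ||	/* wake the server  */
		    read(to_client[0], &c, 1) != 1)	/* sleep until reply */
			break;
	}
	pthread_join(t, NULL);
	return 0;
}
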


> They poll after __current_set_polling() in do_idle(), so they avoid
> this hypercall, I think.

Yes, I was thinking about a variant without polling.
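
A guest-side sketch of such a no-poll variant could look like this.
KVM_HC_SET_NEED_RESCHED is a hypothetical hypercall number, not part of
the current KVM paravirt ABI; only kvm_hypercall1() and
kvm_para_available() are existing kernel interfaces:

#include <linux/smp.h>		/* smp_send_reschedule()		  */
#include <asm/kvm_para.h>	/* kvm_para_available(), kvm_hypercall1() */

/* Hypothetical, unassigned hypercall number - not in the KVM ABI today. */
#define KVM_HC_SET_NEED_RESCHED	13

/*
 * Guest-side reschedule kick: ask the host to set need_resched on the
 * target vcpu instead of injecting a wakeup IPI.  A bogus request can
 * only cause a spurious reschedule inside the caller's own guest.
 */
static void pv_send_reschedule(int cpu)
{
	if (kvm_para_available())
		kvm_hypercall1(KVM_HC_SET_NEED_RESCHED, cpu);
	else
		smp_send_reschedule(cpu);
}
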

> Btw, do you still see the 5-10% bonus for SAP HANA even when adaptive
> halt-polling is enabled?

host = 31.18
halt_poll_ns set to 200000 = 38.55 (80% of host performance)
halt_poll_ns set to 300000 = 33.28 (93% of host performance)
idle_spin set to 220000 = 32.22 (96% of host performance)

(Lower numbers are better; percentages are relative to host.)

So avoiding the IPI VM-exits is faster.

300000 is the optimal halt_poll_ns value for this workload. Haven't
checked adaptive halt-polling.
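
For completeness, the idle_spin idea from the patch is roughly the
following (a minimal sketch, not the actual patch; idle_spin_ns stands
in for the tunable):

#include <linux/sched.h>	/* need_resched()	*/
#include <linux/sched/clock.h>	/* sched_clock()	*/
#include <asm/processor.h>	/* cpu_relax()		*/
#include <asm/irqflags.h>	/* safe_halt()		*/

/*
 * Poll need_resched for up to idle_spin_ns before halting, so a wakeup
 * arriving inside that window is handled without a hlt VM-exit plus a
 * wakeup IPI.
 */
static void idle_with_spin(u64 idle_spin_ns)
{
	u64 deadline = sched_clock() + idle_spin_ns;

	while (!need_resched() && sched_clock() < deadline)
		cpu_relax();

	if (!need_resched())
		safe_halt();	/* on a guest, this is where the VM-exit happens */
}
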