Re: [PATCH] sched: wake-affine throttle
From: Mike Galbraith
Date: Fri May 03 2013 - 02:14:53 EST
On Fri, 2013-05-03 at 13:57 +0800, Michael Wang wrote:
> Hi, Mike
>
> Thanks for your reply.
>
> On 05/03/2013 01:01 PM, Mike Galbraith wrote:
> [snip]
> >>
> >> If this approach caused any concerns, please let me know ;-)
> >
> > I wonder if throttling on failure is the way to go. Note the minimal
> > gain for pgbench with the default 1ms throttle interval. It's not very
> > effective out of the box for the load type it's targeted to help, and
> > people generally don't twiddle scheduler knobs. If you throttle on
> > success, you directly restrict migration frequency without that being
> > affected by what other tasks are doing. Seems that would be a bit more
> > effective.
>
> This is a good time to draw some conclusions on this problem ;-)
>
> Let's suppose that when wake-affine fails, the next attempt is also
> more likely to fail; then whether to throttle on failure comes down
> to a question like:
>
>     should the throttle interval cover more of the failure
>     windows, or more of the success windows?
>
> Obviously we should cover more of the failure windows, since a
> failed attempt just wastes cycles and changes nothing.
>
> However, I used to be concerned about the damage done when
> wake-affine succeeds at that rapid rate; sure, it also brings a
> benefit, but which one is bigger?
>
> Now if we look at the RFC version, which throttled on success, we
> find that for pgbench the benefit with the default 1ms interval is
> < 5%, while the current version, which throttles on failure, brings
> 7% at most.
OK, so scratch that thought. It would still be good to find a
dirt-simple, dirt-cheap way to increase effectiveness a bit, and to
eliminate the knob. Until a better idea comes along, this helps pgbench
some, and will also help fast movers a la tbench on AMD, where
select_idle_sibling() wasn't particularly wonderful per my measurements.
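
For reference, the throttle-on-failure idea boils down to something
like the toy model below -- a compilable userspace sketch, not the
patch itself. struct task, last_fail_ns and the helper names are all
made-up stand-ins for per-task state that would hang off task_struct,
with interval_ns playing the role of the 1ms knob:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct task {
	uint64_t last_fail_ns;	/* time of the last failed affine attempt */
};

static const uint64_t interval_ns = 1000000;	/* the 1ms default knob */

/* true while we are still inside the throttle window */
static bool throttled(const struct task *p, uint64_t now_ns)
{
	return now_ns - p->last_fail_ns < interval_ns;
}

/*
 * One wakeup: if throttled, skip the affine attempt entirely (the
 * cycles a doomed attempt would burn are the whole point); otherwise
 * try, and on failure arm the throttle window.
 */
static bool wake_affine_attempt(struct task *p, uint64_t now_ns,
				bool would_succeed)
{
	if (throttled(p, now_ns))
		return false;
	if (!would_succeed) {
		p->last_fail_ns = now_ns;
		return false;
	}
	return true;	/* pull the wakee toward the waker's CPU */
}

int main(void)
{
	struct task t = { .last_fail_ns = 0 };

	/* a failure at t=2ms throttles further attempts until t=3ms */
	wake_affine_attempt(&t, 2000000, false);
	printf("2.5ms attempt allowed? %d\n",
	       wake_affine_attempt(&t, 2500000, true));	/* 0: throttled */
	printf("3.1ms attempt allowed? %d\n",
	       wake_affine_attempt(&t, 3100000, true));	/* 1: window over */
	return 0;
}

Throttling on success, the RFC variant, would instead stamp the
timestamp on the success path, directly capping migration frequency
as discussed above.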
-Mike