Re: [PATCH] sched: wake-affine throttle

From: Michael Wang
Date: Fri May 03 2013 - 01:57:51 EST


Hi, Mike

Thanks for your reply.

On 05/03/2013 01:01 PM, Mike Galbraith wrote:
[snip]
>>
>> If this approach caused any concerns, please let me know ;-)
>
> I wonder if throttling on failure is the way to go. Note the minimal
> gain for pgbench with the default 1ms throttle interval. It's not very
> effective out of the box for the load type it's targeted to help, and
> people generally don't twiddle scheduler knobs. If you throttle on
> success, you directly restrict migration frequency without that being
> affected by what other tasks are doing. Seems that would be a bit more
> effective.

This is good timing to draw some conclusions on this problem ;-)

Let's suppose that when wake-affine fails, the next attempt has a higher
chance of failing too. Then whether to throttle on failure comes down to
a question like:

	should the throttle interval cover more failing attempts,
	or more succeeding ones?

Obviously we should cover more failing attempts, since they just waste
cycles and change nothing.

However, I used to be concerned about the damage done by wake-affine
succeeding at such a high rate; sure, it also brings a benefit, but
which one is bigger?

Now if we look at the RFC version, which throttles on success, for
pgbench the benefit with the default 1ms interval is < 5%, while the
current version, which throttles on failure, brings up to 7%.

And that eliminates my concern :)
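
To make the difference concrete, below is a rough user-space sketch of
the two policies. It is NOT the real patch; the names last_affine_ns,
throttle_interval_ns and the mocked wake_affine_ok() are made up purely
for illustration:

	/*
	 * Toy model of throttle-on-failure vs. throttle-on-success.
	 * All names here are hypothetical, for illustration only.
	 */
	#include <stdio.h>
	#include <stdbool.h>
	#include <stdint.h>

	struct task {
		uint64_t last_affine_ns;	/* when the current throttle interval started */
	};

	static const uint64_t throttle_interval_ns = 1000000;	/* default 1ms */

	/* Mocked wake-affine predicate: odd wakeups fail, even ones succeed. */
	static bool wake_affine_ok(int wakeup)
	{
		return (wakeup % 2) == 0;
	}

	/*
	 * Throttle on FAILURE (current version): a failed attempt starts the
	 * interval, so the interval mostly covers wakeups that would have
	 * wasted cycles anyway.
	 */
	static bool affine_throttle_on_failure(struct task *p, uint64_t now, int wakeup)
	{
		if (now - p->last_affine_ns < throttle_interval_ns)
			return false;			/* still throttled, go non-affine */
		if (!wake_affine_ok(wakeup)) {
			p->last_affine_ns = now;	/* failure starts the interval */
			return false;
		}
		return true;				/* pull the wakee */
	}

	/*
	 * Throttle on SUCCESS (RFC version): a successful pull starts the
	 * interval, which directly caps migration frequency but also blocks
	 * pulls that might have been beneficial.
	 */
	static bool affine_throttle_on_success(struct task *p, uint64_t now, int wakeup)
	{
		if (now - p->last_affine_ns < throttle_interval_ns)
			return false;
		if (wake_affine_ok(wakeup)) {
			p->last_affine_ns = now;	/* success starts the interval */
			return true;
		}
		return false;
	}

	int main(void)
	{
		struct task a = { 0 }, b = { 0 };
		uint64_t now = throttle_interval_ns;	/* start un-throttled */

		/* Four wakeups, 0.4ms apart, i.e. within one throttle interval. */
		for (int w = 1; w <= 4; w++, now += 400000)
			printf("wakeup %d: on-failure=%d on-success=%d\n", w,
			       affine_throttle_on_failure(&a, now, w),
			       affine_throttle_on_success(&b, now, w));
		return 0;
	}

The point of the toy: with throttle-on-failure, only a failed attempt
starts an interval, so successful pulls are never the trigger; with
throttle-on-success, every pull starts an interval, so it also
suppresses pulls that could have helped.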

>
> (I still like the wakeup buddy thing, it's more effective because it
> adds and uses knowledge, though without the knob, cache domain size.
> Peter is right about the interrupt wakeups though, that could very
> easily cause regressions, dirt simple throttle is much safer).

Exactly, a dark issue deserves a dark solution, let darkness guide him...

Regards,
Michael Wang

>
> -Mike
>
