Re: [RFC PATCH] sched/fair: Introduce SIS_PAIR to wakeup task on local idle core first
From: Mike Galbraith
Date: Wed May 17 2023 - 15:53:30 EST
On Thu, 2023-05-18 at 00:57 +0800, Chen Yu wrote:
> >
> I'm thinking of two directions based on the current patch:
>
> 1. Check the task duration; if it is a high-speed ping-pong pair, let
> the wakee search for an idle SMT sibling on the current core.
>
> This strategy gives the best overall performance improvement, but
> the short-task-duration tweak based on the online CPU number would be
> an obstacle.
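Spelled out, that duration gate would land somewhere in the wakeup fast
path; a minimal sketch (task_avg_duration(), SHORT_TASK_NS and the
function name are placeholders, not the posted patch) might look like:

static int sis_pair_sketch(struct task_struct *p, int this_cpu, int target)
{
	int cpu;

	/* Only stack the pair if both sides have short average run times. */
	if (task_avg_duration(current) < SHORT_TASK_NS &&
	    task_avg_duration(p) < SHORT_TASK_NS) {
		/* Probe the waker's SMT siblings before the usual LLC scan. */
		for_each_cpu(cpu, cpu_smt_mask(this_cpu)) {
			if (cpu == this_cpu)
				continue;
			if (available_idle_cpu(cpu) &&
			    cpumask_test_cpu(cpu, p->cpus_ptr))
				return cpu;
		}
	}

	return target;	/* fall back to the normal selection */
}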
Duration is pretty useless, as it says nothing about concurrency.
Taking the 500us metric as an example, a single pipe ping-pong pair can
meet it, yet stacking based on duration alone tosses up to nearly 50%
of its throughput out the window.
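One way to put a number on that for a given box is a plain pipe
ping-pong pinned either across two cores or onto the SMT siblings of
one core; the CPU arguments below are whatever the local topology
dictates, and the program is only a measuring stick, not part of the
patch:

/* pingpong.c - ping-pong a byte between two pinned processes.
 * Usage: ./pingpong <cpuA> <cpuB>
 * Run once with two CPUs on different cores and once with two SMT
 * siblings of the same core, then compare round trips per second.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static void pin(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");
}

int main(int argc, char **argv)
{
	int a2b[2], b2a[2];
	long i, loops = 300000;
	char c = 0;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <cpuA> <cpuB>\n", argv[0]);
		return 1;
	}
	if (pipe(a2b) || pipe(b2a))
		return 1;

	if (fork() == 0) {		/* child: echo side */
		pin(atoi(argv[2]));
		for (i = 0; i < loops; i++) {
			read(a2b[0], &c, 1);
			write(b2a[1], &c, 1);
		}
		return 0;
	}

	pin(atoi(argv[1]));		/* parent: timed side */
	struct timespec t0, t1;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < loops; i++) {
		write(a2b[1], &c, 1);
		read(b2a[0], &c, 1);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%.0f round trips/sec\n", loops / secs);
	return 0;
}

Each leg of such a round trip easily fits under 500us, yet the two
pinnings can differ a great deal in throughput, which is the point:
duration alone doesn't tell you whether stacking is safe.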
> Or
>
> 2. Honor the idle core.
> That is to say, if there is an idle core in the system, choose that
> idle core first. Otherwise, fall back to searching for an idle SMT
> sibling rather than choosing an idle CPU on a random half-busy core.
>
> This strategy could partially mitigate the C2C overhead without
> breaking the idle-core-first strategy. So I gave it a try, and with
> the above change I did see some improvement when the system is around
> half busy (after all, has_idle_core has to be false):
If mitigation is the goal (and until the next iteration of socket
growth, that's not a waste of effort), continuing to honor the idle
core is the only option that has a ghost of a chance.
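In select_idle_sibling() terms, keeping that ordering would look
roughly like the sketch below (helper names are approximate, and this
is not the posted patch): an idle core anywhere in the LLC still wins,
and only when none is left does the waker's own sibling get preferred
over a random idle CPU on a half-busy core.

static int sis_prefer_local_sibling(struct task_struct *p, int this_cpu,
				    bool has_idle_core)
{
	int cpu;

	/* An idle core anywhere in the LLC still wins, as it does today. */
	if (has_idle_core)
		return -1;

	/*
	 * No idle core left: prefer an idle sibling of the waker's core
	 * over a random idle CPU on some other half-busy core, keeping
	 * the pair's cache traffic within one core.
	 */
	for_each_cpu_and(cpu, cpu_smt_mask(this_cpu), p->cpus_ptr) {
		if (cpu != this_cpu && available_idle_cpu(cpu))
			return cpu;
	}

	return -1;	/* fall back to the normal LLC scan */
}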
That said, I don't much like the waker/wakee-have-met heuristic either,
because tasks having woken one another before can just as well mean they
met at a sleeping lock; it does not necessarily imply latency-bound IPC.
I haven't met a heuristic I like, and that includes the ones I invent.
The smarter you try to make them, the more precious fast-path cycles
they eat, and there's a never-ending supply of holes in the damn things
that want plugging. A prime example was the SIS_CURRENT heuristic
self-destructing in my box, rendering that patch a not-quite-free noop :)
-Mike