Re: [PATCH v2] sched/cpuset: distribute tasks within affinity masks
From: Qais Yousef
Date: Wed Mar 18 2020 - 07:35:03 EST
On 03/17/20 14:35, Josh Don wrote:
> On Wed, Mar 11, 2020 at 7:05 AM Qais Yousef <qais.yousef@xxxxxxx> wrote:
> >
> > This actually helps me fix a similar problem I faced in RT [1]. If multiple RT
> > tasks wake up at the same time we get a 'thundering herd' issue where they all
> > end up going to the same CPU, just to be pushed out again.
> >
> > Besides, this will help fix another problem with RT task fitness, which is
> > a manifestation of the problem above. If two tasks wake up at the same time and
> > they happen to run on a little cpu (but request to run on a big one), one of
> > them will end up being migrated because find_lowest_rq() will return the first
> > cpu in the mask for both tasks.
> >
> > I tested the API (not the change in sched/core.c) and it looks good to me.
>
> Nice, glad that the API already has another use case. Thanks for taking a look.
>
> > nit: cpumask_first_and() is better here?
>
> Yea, I would also prefer to use it, but the definition of
> cpumask_first_and() follows this section, as it itself uses
> cpumask_next_and().
>
> > It might be a good idea to split the API from the user too.
>
> Not sure what you mean by this, could you clarify?
I meant it'd be a good idea to split the cpumask API into its own patch and
have a separate patch for the user in sched/core.c. But that was a small nit.
That way, if the user (in sched/core.c) somehow introduces a regression,
reverting it separately should be trivial.
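For anyone following along, the round-robin selection being discussed can be
modelled in plain userspace C. This is only an illustrative sketch with
hypothetical names (a plain bitmask stands in for a cpumask, and a static
'prev' stands in for the remembered state); it is not the kernel
implementation:

```c
#include <assert.h>

#define NR_CPUS 8

/* Find the next set bit in 'mask' strictly after 'prev', or NR_CPUS if none. */
static int next_cpu(unsigned int mask, int prev)
{
	int cpu;

	for (cpu = prev + 1; cpu < NR_CPUS; cpu++)
		if (mask & (1u << cpu))
			return cpu;
	return NR_CPUS;
}

/*
 * Pick a CPU from (p & q) in round-robin fashion, remembering the previous
 * choice so that back-to-back callers spread across the mask instead of all
 * getting the first CPU (the 'thundering herd' described above).
 */
static int pick_distributed(unsigned int p, unsigned int q)
{
	static int prev = -1;
	unsigned int and = p & q;
	int cpu;

	if (!and)
		return NR_CPUS;

	cpu = next_cpu(and, prev);
	if (cpu >= NR_CPUS)		/* wrap around to the start */
		cpu = next_cpu(and, -1);
	prev = cpu;
	return cpu;
}
```

With p = 0x0F and q = 0x06 (so the intersection is CPUs 1 and 2), successive
calls alternate between 1 and 2 rather than always returning CPU 1, which is
the behaviour that helps both the wakeup herding and the find_lowest_rq()
case.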
Thanks
--
Qais Yousef
>
> On Tue, Mar 17, 2020 at 12:24 PM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> >
> > > Anyway, for the API.
> > >
> > > Reviewed-by: Qais Yousef <qais.yousef@xxxxxxx>
> > > Tested-by: Qais Yousef <qais.yousef@xxxxxxx>
> >
> > Thanks guys!
>
> Thanks Peter, any other comments or are you happy with merging this patch as-is?