Re: [PATCH v2] sched/cpuset: distribute tasks within affinity masks

From: Peter Zijlstra
Date: Tue Mar 17 2020 - 15:24:28 EST


On Wed, Mar 11, 2020 at 02:05:33PM +0000, Qais Yousef wrote:
> On 03/10/20 18:01, Josh Don wrote:
> > From: Paul Turner <pjt@xxxxxxxxxx>
> >
> > Currently, when updating the affinity of tasks via either cpuset.cpus
> > or sched_setaffinity(), tasks not currently running within the newly
> > specified mask will be arbitrarily assigned to the first CPU within the
> > mask.
> >
> > This (particularly when we are restricting masks) can result in many
> > tasks being assigned to the first CPUs of their new masks.
> >
> > This:
> > 1) Can induce scheduling delays while the load balancer has a chance to
> > spread them between their new CPUs.
> > 2) Can antagonize a poor load-balancer behavior, where it has a
> > difficult time recognizing that a cross-socket imbalance has been
> > forced by an affinity mask.
> >
> > This change adds a new cpumask interface to allow iterated calls to
> > distribute within the intersection of the provided masks.
> >
> > The cases that this mainly affects are:
> > - modifying cpuset.cpus
> > - when tasks join a cpuset
> > - when modifying a task's affinity via sched_setaffinity(2)
> >
> > Co-developed-by: Josh Don <joshdon@xxxxxxxxxx>
> > Signed-off-by: Josh Don <joshdon@xxxxxxxxxx>
> > Signed-off-by: Paul Turner <pjt@xxxxxxxxxx>

> Anyway, for the API.
>
> Reviewed-by: Qais Yousef <qais.yousef@xxxxxxx>
> Tested-by: Qais Yousef <qais.yousef@xxxxxxx>

Thanks guys!