Re: [PATCH v2] sched/fair: Sync se's load_avg with cfs_rq in reweight_task

From: Qais Yousef
Date: Sun Jul 28 2024 - 16:14:42 EST


On 07/22/24 10:47, K Prateek Nayak wrote:
> (+ Qais)
>
> Hello Chuyi,
>
> On 7/20/2024 10:42 AM, Chuyi Zhou wrote:
> > In reweight_task(), there are two situations:
> >
> > 1. The task was on_rq; its load_avg is accurate because we
> > synchronized it with cfs_rq through update_load_avg() in dequeue_task().
> >
> > 2. The task is sleeping; its load_avg might not have been updated for some
> > time, which can result in an inaccurate dequeue_load_avg() in
> > reweight_entity().
> >
> > This patch solves this by using update_load_avg() to synchronize the
> > load_avg of se with cfs_rq. For tasks that were on_rq, since we already
> > updated load_avg to accurate values in dequeue_task(), this change will
> > not have other effects due to the short time interval between the two
> > updates.
> >
> > Signed-off-by: Chuyi Zhou <zhouchuyi@xxxxxxxxxxxxx>
> > ---
> > Changes in v2:
> > - change the description in commit log.
> > - use update_load_avg() in reweight_task() rather than in reweight_entity
> > suggested by chengming.
> > - Link to v1: https://lore.kernel.org/lkml/20240716150840.23061-1-zhouchuyi@xxxxxxxxxxxxx/
> > ---
> > kernel/sched/fair.c | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 9057584ec06d..b1e07ce90284 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3835,12 +3835,15 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
> > }
> > }
> > +static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags);
> > +
> > void reweight_task(struct task_struct *p, const struct load_weight *lw)
> > {
> > struct sched_entity *se = &p->se;
> > struct cfs_rq *cfs_rq = cfs_rq_of(se);
> > struct load_weight *load = &se->load;
> > + update_load_avg(cfs_rq, se, 0);

White space and a comment perhaps?
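
Something along these lines, say (just a sketch, the rest of the body is
from memory and the comment wording is only a suggestion):

	void reweight_task(struct task_struct *p, const struct load_weight *lw)
	{
		struct sched_entity *se = &p->se;
		struct cfs_rq *cfs_rq = cfs_rq_of(se);
		struct load_weight *load = &se->load;

		/*
		 * A sleeping task's load_avg could be stale; sync it with
		 * cfs_rq so dequeue_load_avg() in reweight_entity() works
		 * on up-to-date values.
		 */
		update_load_avg(cfs_rq, se, 0);

		reweight_entity(cfs_rq, se, lw->weight);
		load->inv_weight = lw->inv_weight;
	}

That keeps the reason for the extra update obvious to future readers.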

LGTM anyway.