Re: [PATCH] sched/fair: Fix stale comment referencing update_cfs_shares()
From: Christian Loehle
Date: Wed Apr 08 2026 - 05:03:30 EST
On 4/2/26 04:13, Zhan Xusheng wrote:
> update_cfs_shares() has been renamed to update_cfs_group(),
> but some comments still refer to the old function name.
>
> Update these comments to reflect the current code and avoid confusion.
>
> Signed-off-by: Zhan Xusheng <zhanxusheng@xxxxxxxxxx>
> ---
> kernel/sched/fair.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index bf948db905ed..172194919c33 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4155,7 +4155,7 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> * differential update where we store the last value we propagated. This in
> * turn allows skipping updates if the differential is 'small'.
> *
> - * Updating tg's load_avg is necessary before update_cfs_share().
> + * Updating tg's load_avg is necessary before update_cfs_group().
> */
> static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
> {
> @@ -4615,7 +4615,7 @@ static void migrate_se_pelt_lag(struct sched_entity *se) {}
> * The cfs_rq avg is the direct sum of all its entities (blocked and runnable)
> * avg. The immediate corollary is that all (fair) tasks must be attached.
> *
> - * cfs_rq->avg is used for task_h_load() and update_cfs_share() for example.
> + * cfs_rq->avg is used for task_h_load() and update_cfs_group() for example.
> *
> * Return: true if the load decayed or we removed load.
> *
The changelog should mention the commit that did the rename:
1ea6c46a23f1 ("sched/fair: Propagate an effective runnable_load_avg")
Apart from that:
Reviewed-by: Christian Loehle <christian.loehle@xxxxxxx>