Re: [PATCH v9 RESEND 1/3] sched/fair: Co-locate cfs_rq and sched_entity in cfs_tg_state

From: Josh Don

Date: Wed Apr 01 2026 - 22:22:20 EST


On Tue, Mar 31, 2026 at 7:58 PM K Prateek Nayak <kprateek.nayak@xxxxxxx> wrote:
>
> Hello Josh,
>
> On 4/1/2026 1:05 AM, Josh Don wrote:
> >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >> index 0a35a82e4792..79160d419c9f 100644
> >> --- a/kernel/sched/fair.c
> >> +++ b/kernel/sched/fair.c
> >> @@ -13877,8 +13877,6 @@ void free_fair_sched_group(struct task_group *tg)
> >> for_each_possible_cpu(i) {
> >> if (tg->cfs_rq)
> >> kfree(tg->cfs_rq[i]);
> >> - if (tg->se)
> >> - kfree(tg->se[i]);
> >> }
> >
> > To be more explicit, suggest converting tg->cfs_rq[i] to an explicit
> > cfs_tg_state pointer via container_of, to not depend on cfs_rq being
> > the first member of the struct.
>
> I suggested the __no_randomize_layout trick since it is similar to what
> "struct sched_entity_stats" does, and it saves us from checking whether
> "tg->cfs_rq[i]" is NULL before doing container_of() + kfree().
>
> It isn't any more obscure than how kfree(tg->se[i]) would have previously
> freed up the stats member too in one go, but perhaps a comment on top
> can address your concerns?

This isn't a hot path that requires optimizing to that level; we're
talking about cgroup free here :)

The downside of the current approach is the brittleness of requiring
that the field be the very first member of the struct, plus
__no_randomize_layout. At minimum that needs a comment, but again I
struggle to see the justification for the complexity.

Up to Peter, but at minimum I think we need a comment on the struct if
we keep it as-is.

>
> --
> Thanks and Regards,
> Prateek
>