Re: [PATCH] sched/debug: Show intergroup and hierarchy sum wait time of a task group

From: Yuzhoujian
Date: Sun Feb 10 2019 - 21:44:35 EST


Hi Peter
> The problem I have with this is that it will make schedstats even more
> expensive :/

I think the overhead of accounting the hierarchy wait time is about the
same as that of cpuacct.usage. If the measured performance overhead is
low enough (< 1%), would that be acceptable?
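
For reference, cpuacct's charging path has the same shape as the loop
added in this patch: a walk up the cgroup ancestors with one add per
level, on a path that is already hot. Below is a simplified sketch of
cpuacct_charge() from kernel/sched/cpuacct.c (helper and field names
are approximate, not verbatim kernel source), just to illustrate why I
expect the cost to be comparable:

/*
 * Simplified sketch of cpuacct_charge(); names are approximate.
 * Like update_hierarchy_wait_sum() above, it visits every ancestor
 * group and does a single add per level.
 */
void cpuacct_charge(struct task_struct *tsk, u64 cputime)
{
	struct cpuacct *ca;
	int cpu = task_cpu(tsk);

	rcu_read_lock();
	for (ca = task_ca(tsk); ca; ca = parent_ca(ca)) {
		u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);

		*cpuusage += cputime;
	}
	rcu_read_unlock();
}

In the same way, update_hierarchy_wait_sum() adds one __schedstat_add()
per cgroup level on the wait-end path, and only when schedstats are
enabled.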

Thanks
Yuzhoujian

On Thu, Feb 7, 2019 at 1:19, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Wed, Jan 23, 2019 at 05:46:56PM +0800, ufo19890607@xxxxxxxxx wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e2ff4b6..35e89ca 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -858,6 +858,19 @@ static void update_curr_fair(struct rq *rq)
> > }
> >
> > static inline void
> > +update_hierarchy_wait_sum(struct sched_entity *se,
> > +			  u64 delta_wait)
> > +{
> > +	for_each_sched_entity(se) {
> > +		struct cfs_rq *cfs_rq = cfs_rq_of(se);
> > +
> > +		if (cfs_rq->tg != &root_task_group)
> > +			__schedstat_add(cfs_rq->hierarchy_wait_sum,
> > +					delta_wait);
> > +	}
> > +}
> > +
> > +static inline void
> > update_stats_wait_end(struct cfs_rq *cfs_rq, struct sched_entity *se)
> > {
> > struct task_struct *p;
> > @@ -880,6 +893,7 @@ static void update_curr_fair(struct rq *rq)
> > 			return;
> > 		}
> > 		trace_sched_stat_wait(p, delta);
> > +		update_hierarchy_wait_sum(se, delta);
> > 	}
> >
> > 	__schedstat_set(se->statistics.wait_max,
>
> The problem I have with this is that it will make schedstats even more
> expensive :/