Re: [patch v5 06/15] sched: log the cpu utilization at rq
From: Peter Zijlstra
Date: Wed Feb 20 2013 - 04:31:28 EST
On Mon, 2013-02-18 at 13:07 +0800, Alex Shi wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fcdb21f..b9a34ab 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1495,8 +1495,12 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
>
> static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
> {
> + u32 period;
> __update_entity_runnable_avg(rq->clock_task, &rq->avg, runnable);
> __update_tg_runnable_avg(&rq->avg, &rq->cfs);
> +
> + period = rq->avg.runnable_avg_period ? rq->avg.runnable_avg_period : 1;
> + rq->util = rq->avg.runnable_avg_sum * 100 / period;
> }
>
> /* Add the load generated by se into cfs_rq's child load-average */
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 7a19792..ac1e107 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -350,6 +350,9 @@ extern struct root_domain def_root_domain;
>
> #endif /* CONFIG_SMP */
>
> +/* the percentage of full cpu utilization */
> +#define FULL_UTIL 100
There's generally a better value than 100 when using computers... seeing
how 100 is 64+32+4, i.e. not a power of two, so the fixed-point math
here can't be done with shifts.
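
Something like the below is the kind of thing I mean -- untested sketch,
with 1024 picked only because it matches the << 10 fixed point we already
use elsewhere (SCHED_POWER_SCALE and friends), not something this patch
defines:

	/* sketch: power-of-two scale, the multiply becomes a shift */
	#define FULL_UTIL	(1 << 10)	/* instead of 100 */

	period = rq->avg.runnable_avg_period ? : 1;
	rq->util = (rq->avg.runnable_avg_sum << 10) / period;

That also makes comparisons against the other << 10 scaled quantities
in the load-balancer come out in the same units.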
> +
> /*
> * This is the main, per-CPU runqueue data structure.
> *
> @@ -481,6 +484,7 @@ struct rq {
> #endif
>
> struct sched_avg avg;
> + unsigned int util;
> };
>
> static inline int cpu_of(struct rq *rq)
You don't actually compute the rq utilization, you only compute the
utilization as per the fair class, so if there's significant RT activity
it'll think the cpu is under-utilized, which I think will result in the
wrong thing.
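
Roughly the below is what I'm thinking of -- purely an untested sketch
(units hand-waved), reusing the rq->rt_avg accounting that
scale_rt_power() already relies on:

	/* sketch: fold the decayed RT runtime into the estimate */
	u64 rt = rq->rt_avg;
	u64 total = sched_avg_period() + rt;
	unsigned int rt_util = div64_u64(rt * FULL_UTIL, total);

	rq->util = min_t(unsigned int, FULL_UTIL, rq->util + rt_util);

Without something along those lines a cpu that spends most of its time
running RT tasks will still report itself as mostly idle.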