Re: [RESEND PATCH 2/3 v5] sched: Rewrite per entity runnable load average tracking
From: Peter Zijlstra
Date: Tue Oct 21 2014 - 10:56:59 EST
On Fri, Oct 10, 2014 at 10:21:56AM +0800, Yuyang Du wrote:
> static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq)
> {
> - long tg_weight;
> -
> - /*
> - * Use this CPU's actual weight instead of the last load_contribution
> - * to gain a more accurate current total weight. See
> - * update_cfs_rq_load_contribution().
> - */
> - tg_weight = atomic_long_read(&tg->load_avg);
> - tg_weight -= cfs_rq->tg_load_contrib;
> - tg_weight += cfs_rq->load.weight;
> -
> - return tg_weight;
> + return atomic_long_read(&tg->load_avg);
Since you're now also delaying updating load_avg, why not retain this
slightly better approximation?