Re: [RESEND PATCH 2/3 v5] sched: Rewrite per entity runnable load average tracking

From: Peter Zijlstra
Date: Tue Oct 21 2014 - 10:32:32 EST


On Fri, Oct 10, 2014 at 10:21:56AM +0800, Yuyang Du wrote:
> +static __always_inline u64 decay_load(u64 val, u64 n)
> +{
> +	if (likely(val <= UINT_MAX))
> +		return decay_load32(val, n);
> +
> +	return mul_u64_u32_shr(val, decay_load32(1 << 15, n), 15);
> +}

This still doesn't make any sense; why not put that mul_u64_u32_shr()
into decay_load() and be done with it?
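
For reference, mul_u64_u32_shr() from include/linux/math64.h does the
multiply at full width before shifting, so it cannot overflow even when
val is larger than UINT_MAX, and the val <= UINT_MAX special case above
buys nothing. A minimal userspace sketch of that behaviour (not kernel
code; the __int128 variant of the helper is assumed, and the constant is
only an approximation of the first sub-1.0 entry of the kernel's decay
table):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Same semantics as the kernel helper: full 64x32 multiply, then shift. */
static uint64_t mul_u64_u32_shr(uint64_t a, uint32_t mul, unsigned int shift)
{
	return (uint64_t)(((unsigned __int128)a * mul) >> shift);
}

int main(void)
{
	uint32_t inv = 0xfa83b2da;	/* ~0.9786 * 2^32, i.e. y^1 with y^32 = 1/2 */
	uint64_t small = 47742;		/* small enough for the old 64-bit multiply */
	uint64_t big = 3ULL << 40;	/* val * inv would overflow 64 bits here */

	/* Matches the plain "val * inv >> 32" path while that doesn't overflow... */
	assert(mul_u64_u32_shr(small, inv, 32) == ((small * (uint64_t)inv) >> 32));

	/* ...and keeps rounding down correctly for values above UINT_MAX. */
	printf("%llu\n", (unsigned long long)mul_u64_u32_shr(big, inv, 32));
	return 0;
}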

---
kernel/sched/fair.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b78280c59b46..67c08d4a3df8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2231,9 +2231,8 @@ static __always_inline u64 decay_load(u64 val, u64 n)
 		local_n %= LOAD_AVG_PERIOD;
 	}
 
-	val *= runnable_avg_yN_inv[local_n];
-	/* We don't use SRR here since we always want to round down. */
-	return val >> 32;
+	val = mul_u64_u32_shr(val, runnable_avg_yN_inv[local_n], 32);
+	return val;
 }
 
 /*
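
For completeness, the context lines kept by the hunk implement the split
y^n = (1/2)^(n/PERIOD) * y^(n mod PERIOD) (with y^LOAD_AVG_PERIOD = 1/2):
the shift handles the power-of-two factor and the runnable_avg_yN_inv[]
table, scaled by 2^32, covers the remainder, so the decay is evaluated in
constant time. A rough floating-point cross-check of that decomposition
(standalone sketch; LOAD_AVG_PERIOD = 32 as in fair.c, table entry computed
on the fly instead of taken from runnable_avg_yN_inv[], input values
arbitrary):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define LOAD_AVG_PERIOD 32	/* y^32 = 1/2, as in kernel/sched/fair.c */

int main(void)
{
	double y = pow(0.5, 1.0 / LOAD_AVG_PERIOD);
	uint64_t val = 47742, n = 100;	/* arbitrary load value and elapsed periods */

	/* Direct evaluation of val * y^n ... */
	double direct = (double)val * pow(y, (double)n);

	/* ... versus shift by n/PERIOD plus a 32.32 fixed-point multiply
	 * for the n%PERIOD remainder, as the kernel code does. */
	uint32_t inv = (uint32_t)(pow(y, (double)(n % LOAD_AVG_PERIOD)) * 4294967296.0);
	uint64_t approx = ((val >> (n / LOAD_AVG_PERIOD)) * (uint64_t)inv) >> 32;

	/* The two agree up to the truncation introduced by the shift. */
	printf("direct=%.1f decomposed=%llu\n", direct, (unsigned long long)approx);
	return 0;
}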