Re: [PATCH v2] sched: Optimize __calc_delta.

From: Nick Desaulniers
Date: Thu Mar 04 2021 - 12:36:36 EST


On Wed, Mar 3, 2021 at 2:48 PM Josh Don <joshdon@xxxxxxxxxx> wrote:
>
> From: Clement Courbet <courbet@xxxxxxxxxx>
>
> A significant portion of __calc_delta() time is spent in the loop
> that shifts a u64 until its upper 32 bits are clear. Use `fls` to
> compute the required shift in one step instead of iterating.
>
> This is ~7x faster on benchmarks.
>
> The generic `fls` implementation (`generic_fls`) is still ~4x faster
> than the loop. Architectures that have a better implementation will
> make use of it automatically; on x86, for example, the dedicated
> implementation buys an additional factor of ~2 in speed.
>
> With gcc, the asm versions of `fls` are about the same speed as the
> builtin. With clang, the versions that use `fls` are roughly twice as
> slow as the builtin, because of the way the `fls` function is written:
> clang spills the value to memory (https://godbolt.org/z/EfMbYe). This
> bug is filed at https://bugs.llvm.org/show_bug.cgi?id=49406.

Hi Josh, thanks for helping get this patch across the finish line.
Would you mind updating the commit message to point to
https://bugs.llvm.org/show_bug.cgi?id=20197?
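For anyone following along: 20197 is the long-standing report that
LLVM always selects the memory alternative for an "rm" inline asm
constraint. A minimal userspace sketch of the pattern (my rough
approximation of the x86_64 fls(); the real one lives in
arch/x86/include/asm/bitops.h):

```
/*
 * Illustration only, not kernel code. bsr leaves the destination
 * unmodified for a zero input on the CPUs the kernel supports, so
 * the "0" (-1) preload makes fls_sketch(0) == 0 after the +1.
 */
static inline int fls_sketch(unsigned int x)
{
	int r;

	/* "rm" lets the compiler choose a register or a memory
	 * operand; clang currently always chooses memory (PR20197),
	 * while gcc keeps the value in a register when it can. */
	asm("bsrl %1,%0"
	    : "=r" (r)
	    : "rm" (x), "0" (-1));
	return r + 1;
}

/* Non-inlined caller so the codegen is easy to inspect. */
int fls_sketch_user(unsigned int x)
{
	return fls_sketch(x);
}
```

Compare gcc -O2 vs clang -O2 on that: with clang, `x` takes a round
trip through the stack, which lines up with the asm_fls vs
asm_fls_nomem gap in the numbers below.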

>
> ```
> name                                   cpu/op
> BM_Calc<__calc_delta_loop>             9.57ms ±12%
> BM_Calc<__calc_delta_generic_fls>      2.36ms ±13%
> BM_Calc<__calc_delta_asm_fls>          2.45ms ±13%
> BM_Calc<__calc_delta_asm_fls_nomem>    1.66ms ±12%
> BM_Calc<__calc_delta_asm_fls64>        2.46ms ±13%
> BM_Calc<__calc_delta_asm_fls64_nomem>  1.34ms ±15%
> BM_Calc<__calc_delta_builtin>          1.32ms ±11%
> ```
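If anyone wants to poke at the generic-vs-builtin gap outside the
kernel, a throwaway harness along these lines shows it (my own plain-C
sketch; judging by the BM_ names, the numbers above came from a C++
Google Benchmark setup):

```
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Same shape as the kernel's include/asm-generic/bitops/fls.h
 * (reproduced from memory). */
static int generic_fls(uint32_t x)
{
	int r = 32;

	if (!x)
		return 0;
	if (!(x & 0xffff0000u)) { x <<= 16; r -= 16; }
	if (!(x & 0xff000000u)) { x <<= 8;  r -= 8;  }
	if (!(x & 0xf0000000u)) { x <<= 4;  r -= 4;  }
	if (!(x & 0xc0000000u)) { x <<= 2;  r -= 2;  }
	if (!(x & 0x80000000u)) { x <<= 1;  r -= 1;  }
	return r;
}

static int builtin_fls(uint32_t x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}

static double bench(int (*fls_fn)(uint32_t))
{
	struct timespec a, b;
	uint32_t x = 1, sink = 0;

	clock_gettime(CLOCK_MONOTONIC, &a);
	for (uint32_t i = 0; i < 100000000u; i++) {
		x = x * 1664525u + 1013904223u;	/* cheap LCG inputs */
		sink += fls_fn(x);
	}
	clock_gettime(CLOCK_MONOTONIC, &b);
	/* Consume sink so the loop can't be optimized away. The
	 * indirect call overhead is the same for both variants. */
	fprintf(stderr, "sink=%u\n", sink);
	return (b.tv_sec - a.tv_sec) * 1e3 +
	       (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void)
{
	printf("generic_fls: %.1f ms\n", bench(generic_fls));
	printf("builtin_fls: %.1f ms\n", bench(builtin_fls));
	return 0;
}
```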
>
> Signed-off-by: Clement Courbet <courbet@xxxxxxxxxx>
> Signed-off-by: Josh Don <joshdon@xxxxxxxxxx>
> ---
> kernel/sched/fair.c  | 19 +++++++++++--------
> kernel/sched/sched.h |  1 +
> 2 files changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8a8bd7b13634..a691371960ae 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -229,22 +229,25 @@ static void __update_inv_weight(struct load_weight *lw)
>  static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
>  {
>  	u64 fact = scale_load_down(weight);
> +	u32 fact_hi = (u32)(fact >> 32);
>  	int shift = WMULT_SHIFT;
> +	int fs;
> 
>  	__update_inv_weight(lw);
> 
> -	if (unlikely(fact >> 32)) {
> -		while (fact >> 32) {
> -			fact >>= 1;
> -			shift--;
> -		}
> +	if (unlikely(fact_hi)) {
> +		fs = fls(fact_hi);
> +		shift -= fs;
> +		fact >>= fs;
>  	}
> 
>  	fact = mul_u32_u32(fact, lw->inv_weight);
> 
> -	while (fact >> 32) {
> -		fact >>= 1;
> -		shift--;
> +	fact_hi = (u32)(fact >> 32);
> +	if (fact_hi) {
> +		fs = fls(fact_hi);
> +		shift -= fs;
> +		fact >>= fs;
>  	}
> 
>  	return mul_u64_u32_shr(delta_exec, fact, shift);
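The transformation looks correct to me. For the record, a quick
userspace self-check along these lines passes (hypothetical; 32 -
__builtin_clz() standing in for the kernel's fls()):

```
#include <assert.h>
#include <stdint.h>

/* Stand-in for the kernel's fls(): 1-based index of the highest set
 * bit, 0 for x == 0. */
static int fls32(uint32_t x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}

/* Old normalization: shift the excess out one bit at a time. */
static uint64_t norm_loop(uint64_t fact, int *shift)
{
	while (fact >> 32) {
		fact >>= 1;
		(*shift)--;
	}
	return fact;
}

/* New normalization: compute the whole shift at once with fls(). */
static uint64_t norm_fls(uint64_t fact, int *shift)
{
	uint32_t fact_hi = (uint32_t)(fact >> 32);

	if (fact_hi) {
		int fs = fls32(fact_hi);

		*shift -= fs;
		fact >>= fs;
	}
	return fact;
}

int main(void)
{
	/* Sweep the high bit across all 64 positions, with some low
	 * bits set so the shifted-out tail is nontrivial. */
	for (int bit = 0; bit < 64; bit++) {
		uint64_t fact = (1ull << bit) | 0x12345u;
		int s_loop = 32, s_fls = 32;

		assert(norm_loop(fact, &s_loop) == norm_fls(fact, &s_fls));
		assert(s_loop == s_fls);
	}
	return 0;
}
```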
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 10a1522b1e30..714af71cf983 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -36,6 +36,7 @@
>  #include <uapi/linux/sched/types.h>
> 
>  #include <linux/binfmts.h>
> +#include <linux/bitops.h>

This hunk of the patch is curious. I assume that bitops.h is needed
for fls(); if so, why not #include it in kernel/sched/fair.c directly?
As written, this potentially hurts compile time for every TU that
includes kernel/sched/sched.h.
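i.e. keep sched.h as-is and add the include next to fair.c's other
headers, something like (hypothetical placement, untested):

```
/* kernel/sched/fair.c */
#include <linux/bitops.h>	/* for fls() */
```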

>  #include <linux/blkdev.h>
>  #include <linux/compat.h>
>  #include <linux/context_tracking.h>
> --
> 2.30.1.766.gb4fecdf3b7-goog
>


--
Thanks,
~Nick Desaulniers