Re: [PATCH v2 -next] sched/cputime: Fix mul_u64_u64_div_u64() precision for cputime
From: Uwe Kleine-König
Date: Tue Jul 30 2024 - 02:55:46 EST
Hello,
On Fri, Jul 26, 2024 at 12:44:29PM +0200, Oleg Nesterov wrote:
> On 07/26, Zheng Zucheng wrote:
> >
> > before call mul_u64_u64_div_u64(),
> > stime = 175136586720000, rtime = 135989749728000, utime = 1416780000.
>
> So stime + utime == 175138003500000
>
> > after call mul_u64_u64_div_u64(),
> > stime = 135989949653530
>
> Hmm. On x86 mul_u64_u64_div_u64(175136586720000, 135989749728000, 175138003500000)
> returns 135989749728000 == rtime, see below.
>
> Nevermind...
>
> > --- a/kernel/sched/cputime.c
> > +++ b/kernel/sched/cputime.c
> > @@ -582,6 +582,12 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
> > }
> >
> > stime = mul_u64_u64_div_u64(stime, rtime, stime + utime);
> > + /*
> > + * Because mul_u64_u64_div_u64() can approximate on some
> > + * architectures, enforce the constraint that: a*b/(b+c) <= a.
> > + */
> > + if (unlikely(stime > rtime))
> > + stime = rtime;
>
> Thanks,
>
> Acked-by: Oleg Nesterov <oleg@xxxxxxxxxx>
>
> -------------------------------------------------------------------------------
> But perhaps it makes sense to improve the accuracy of mul_u64_u64_div_u64() ?
Note there is a patch by Nicolas Pitre currently in mm-nonmm-unstable
that makes mul_u64_u64_div_u64() precise. It was in next for a while as
commit 3cc8bf1a81ef ("mul_u64_u64_div_u64: make it precise always"),
which might explain difficulties reproducing the incorrect results.
An obvious alternative to backporting this change to
kernel/sched/cputime.c for stable is to backport Nicolas's patch
instead. Andrew asked me to provide a justification for sending
Nicolas's patch for inclusion in the current devel cycle, so it might
make it in before 6.11.
Best regards
Uwe