Andy Lutomirski <luto@xxxxxxxxxx> writes:
On Thu, Jan 16, 2020 at 11:57 AM Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
Andy Lutomirski <luto@xxxxxxxxxx> writes:
On Thu, Jan 16, 2020 at 9:58 AM Christophe Leroy wrote:
Would mul_u64_u64_shr() be a good alternative? Could we adjust it to
assume the shift is less than 32? That function exists to benefit
32-bit arches.
We'd want mul_u64_u32_shr() for this. The rules for mult and shift are:
That's what I meant to type...
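For reference, the 32-bit fallback of mul_u64_u32_shr() in
include/linux/math64.h looks roughly like this (a sketch from memory, not
verbatim); the (32 - shift) term in the high half is what already ties it
to shift values of at most 32:

        static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
        {
                u32 ah, al;
                u64 ret;

                al = a;
                ah = a >> 32;

                /* Low 32x32->64 product, shifted down */
                ret = ((u64)al * mul) >> shift;
                /* High product shifted up; only valid for shift <= 32 */
                if (ah)
                        ret += ((u64)ah * mul) << (32 - shift);

                return ret;
        }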
Just that it does not work. The math is:
        ns = d->nsecs;          /* nsec value shifted left by d->shift */
        ns += ((cur - d->last) & d->mask) * mult;
        ns >>= d->shift;
So we cannot use mul_u64_u32_shr() because we need the addition to happen
before the shift. And no, we can't drop the fractional part of d->nsecs:
been there, done that, got sporadic time-going-backwards problems as a
reward. Need to look at that again, as things have changed over time.
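Spelled out with the field names from the snippet above (just a sketch to
make the ordering problem visible):

        /* What we need: the addend goes in before the shift */
        ns = (d->nsecs + ((cur - d->last) & d->mask) * mult) >> d->shift;

        /*
         * What splitting it around mul_u64_u32_shr() would give us: the
         * bits of d->nsecs below d->shift are gone, so consecutive reads
         * can observe time going backwards.
         */
        ns = (d->nsecs >> d->shift)
           + mul_u64_u32_shr((cur - d->last) & d->mask, mult, d->shift);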
On x86 we enforce that the mask is 64 bits wide, so the & operation is not
needed, but due to the nasties of TSC we have this conditional:
        if (cur > last)
                return (cur - last) * mult;
        return 0;
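For comparison, the generic version in lib/vdso/gettimeofday.c is roughly
(sketch, not verbatim):

        static __always_inline u64 vdso_calc_delta(u64 cycles, u64 last,
                                                   u64 mask, u32 mult)
        {
                return ((cycles - last) & mask) * mult;
        }

and x86 replaces it with the conditional above through its own
vdso_calc_delta(), where the mask argument is unused because the mask is
known to be all ones.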
Christophe, on PPC the decrementer/RTC clocksource masks are 64-bit as
well, so you can spare the & operation there too.
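A hypothetical PPC override along the same lines (sketch, assuming PPC
hooks the generic vdso_calc_delta() the same way x86 does) would then be
just:

        static __always_inline u64 vdso_calc_delta(u64 cycles, u64 last,
                                                   u64 mask, u32 mult)
        {
                /* mask is known to be all ones, so the & is a no-op */
                return (cycles - last) * mult;
        }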