Re: [RFC 4/4] cputime: remove scaling

From: Stanislaw Gruszka
Date: Thu Apr 11 2013 - 04:36:11 EST

On Wed, Apr 10, 2013 at 02:02:28PM +0200, Ingo Molnar wrote:
> * Stanislaw Gruszka <sgruszka@xxxxxxxxxx> wrote:
> > Scaling cputime causes problems; a bunch of them have been fixed, but it is
> > still possible to hit a multiplication overflow that makes the {u,s}time
> > values incorrect. This problem has no good solution in the kernel.
> Wasn't 128-bit math a solution to the overflow problems? 128-bit math isn't nice,
> but at least for multiplication it's defensible.

Unfortunately 128-bit division is needed as well. Though in 99.9% of cases it
will go through a 64-bit fast path.
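
For illustration, such scaling with a 64-bit fast path might be sketched like
this (using the compiler's unsigned __int128 rather than the kernel's math
helpers; the function name is made up, this is not the actual kernel code):

```c
#include <stdint.h>

/*
 * Sketch: scaled = time * rtime / total, falling back to 128-bit
 * division only when the product does not fit in 64 bits.
 */
static uint64_t scale_time_128(uint64_t time, uint64_t rtime, uint64_t total)
{
	unsigned __int128 product = (unsigned __int128)time * rtime;

	if (total == 0)
		return time;

	/* Fast path: product fits in 64 bits, plain 64-bit divide */
	if ((uint64_t)(product >> 64) == 0)
		return (uint64_t)product / total;

	/* Slow path: full 128-by-64 division (on most 64-bit targets
	 * the compiler emits a library call such as __udivti3 here) */
	return (uint64_t)(product / total);
}
```

With realistic cputime values the product rarely exceeds 64 bits, which is
where the 99.9% fast-path figure comes from.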

> > This patch removes the scaling code and exports the raw {u,s}time values.
> > Procps programs can use the newly introduced sum_exec_runtime to obtain the
> > precisely calculated process CPU time and scale the utime and stime values
> > accordingly.
> >
> > Unfortunately times(2) syscall has no such option.
> >
> > This change affects kernels compiled without CONFIG_VIRT_CPU_ACCOUNTING_*.
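
For illustration, the userspace side of that scaling could be done in floating
point, roughly like this (a hypothetical sketch, not procps code; it assumes
utime, stime and sum_exec_runtime have already been converted to one unit):

```c
#include <stdint.h>

/*
 * Sketch of what procps could do: keep the utime/stime ratio but
 * make their sum match the precise sum_exec_runtime.
 */
static void scale_cputimes(uint64_t utime, uint64_t stime,
			   uint64_t sum_exec_runtime,
			   uint64_t *utime_out, uint64_t *stime_out)
{
	uint64_t total = utime + stime;

	if (total == 0) {
		/* No tick samples yet: attribute everything to user time */
		*utime_out = sum_exec_runtime;
		*stime_out = 0;
		return;
	}

	/* Floating point is fine in userspace, so no overflow games */
	*utime_out = (uint64_t)((double)utime / total * sum_exec_runtime);
	*stime_out = sum_exec_runtime - *utime_out;
}
```
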
> So, the concern here is that 'top hiding' code can now hide again. It's also that
> we are not really solving the problem, we are pushing it to user-space - which in
> the best case gets updated to solve the problem in some similar fashion - and in
> the worst case does not get updated or does it in a buggy way.
> So while user-space has it a bit easier because it can do floating point math, is
> there really no workable solution to the current kernel side integer overflow bug?

I do not see any. Basically, everything we have either makes the problem less
reproducible or just defers it. The best solution I found, short of full
128-bit math, is something like this (dropping precision when the values are
big enough that an overflow would happen):

u64 _scale_time(u64 rtime, u64 total, u64 time)
{
	/* zero_bits is ~128 minus the bit width of the product time * rtime */
	const int zero_bits = clzll(time) + clzll(rtime);
	u64 scaled;

	if (zero_bits < 64) {
		/* Product would overflow 64 bits: drop precision */
		const int drop_bits = 64 - zero_bits;

		time >>= drop_bits;
		rtime >>= drop_bits;
		total >>= 2 * drop_bits;
	}

	if (total == 0)
		return time;

	scaled = (time * rtime) / total;

	return scaled;
}

This defers the problem for quite a long period. My testing script detects a failure at:

rtime: 1954463459156 <- 22621 days (one thread, CONFIG_HZ=1000)
total: 1771603722423
stime: 354320744484
kernel: 391351504748 <- kernel value
python: 390892691830 <- correct value

For one thread this is fine, but with 512 threads the inaccuracy will show up
after only 40 days (because too many bits of the "total" variable get dropped).
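
The failure above can be reproduced in userspace by pitting the
precision-dropping variant against exact 128-bit math (a standalone sketch
with __builtin_clzll standing in for the kernel-side clzll):

```c
#include <stdint.h>

/* Precision-dropping scaling, as in the function above */
static uint64_t scale_dropping(uint64_t time, uint64_t rtime, uint64_t total)
{
	/* __builtin_clzll is undefined for 0, so guard first */
	if (time == 0 || rtime == 0)
		return 0;

	int zero_bits = __builtin_clzll(time) + __builtin_clzll(rtime);

	if (zero_bits < 64) {	/* time * rtime would overflow 64 bits */
		int drop_bits = 64 - zero_bits;

		time >>= drop_bits;
		rtime >>= drop_bits;
		total >>= 2 * drop_bits;
	}

	if (total == 0)
		return time;

	return time * rtime / total;
}

/* Exact result via 128-bit math, for comparison */
static uint64_t scale_exact(uint64_t time, uint64_t rtime, uint64_t total)
{
	return (uint64_t)((unsigned __int128)time * rtime / total);
}
```

Fed the stime/rtime/total triple above, scale_dropping() returns the
391351504748 the kernel reported, while scale_exact() returns the correct
390892691830, an error of roughly 0.1%.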

> I really prefer robust kernel side accounting/instrumentation.

Perhaps we can switch to one of those options by default. I wonder whether the
additional performance cost associated with them is really something we should
care about. Are there any measurements showing that they make performance
worse?
