Re: Clock monotonic a suggestion

From: john stultz (johnstul@us.ibm.com)
Date: Fri Mar 21 2003 - 15:53:27 EST


On Fri, 2003-03-21 at 00:01, george anzinger wrote:
> Joel Becker wrote:
> > If the system is delayed (udelay() or such) by a driver or
> > something for 10 seconds, then you have this (assume gettimeofday is
> > in seconds for simplicity):
> >
> > 1 gettimeofday = 1000000000
> > 2 driver delays 10s
> > 3 gettimeofday = 1000000000
> > 4 timer notices lag and adjusts
>
> Uh, how is this done? At this time there IS correction for delays up
> to about a second built into the gettimeofday() code. You seem to be
> assuming that we can do better than this with clock monotonic. Given
> the right hardware, this may even be possible, but why not correct
> gettimeofday in the same way?

Because to do it properly is slow. Right now gettimeofday is all done
with 32-bit math, which bounds us to ~2 seconds of counting time
before we overflow the low 32 bits of the TSC on a 2GHz CPU. Rather
than slowing down gettimeofday with 64-bit math just to handle the
rare case where timer interrupts are not serviced for more than 2
seconds, we propose a new interface (monotonic_clock) that provides
increased corner-case accuracy at increased cost.
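As a rough back-of-the-envelope sketch (not the actual timer code,
and assuming a 2GHz TSC as above), the low 32 bits of the counter
wrap after 2^32 cycles:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            /* assumed 2GHz cycle counter frequency */
            uint64_t cpu_hz = 2000000000ULL;

            /* low 32 bits wrap after 2^32 cycles */
            double wrap_seconds = (double)(1ULL << 32) / cpu_hz;

            printf("low 32 bits of TSC wrap after %.2f seconds\n",
                   wrap_seconds);
            return 0;
    }

That works out to about 2.1 seconds, which is where the ~2 second
bound on 32-bit interval math comes from.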

thanks
-john





