Re: dynamic-hz

From: Andrea Arcangeli
Date: Mon Dec 13 2004 - 21:39:41 EST


On Mon, Dec 13, 2004 at 08:12:49PM +0100, Pavel Machek wrote:
> Hi!
>
> > > But that does not matter, right? Yes, a one-shot timer will not fire
> > > at exactly the right moment, but as long as you are reading the TSC and
> > > basing the next shot on the current time, the error should not accumulate.
> >
> > As said in the rest of the message, the error (or some other error)
> > accumulates heavily today in the tick-loss compensation/adjustment
> > algorithm in arch/i386/kernel/timers/timer_tsc.c, so I'm sceptical
> > about
>
> I do not see how it would accumulate. Let's assume a working TSC. You want
> to emulate a fixed-period timer with a single-shot timer.
>
> int should_fire_at;
>
> void handle_single_shot()
> {
> 	int delay;
> retry:
> 	should_fire_at += loops_per_second/HZ;
> 	delay = should_fire_at - get_tsc();
> 	if (delay < 0)
> 		goto retry;

Here you get a 10-minute-long NMI and you're automatically 10 minutes in
the past (or your event gets a 10-minute delay introduced) without a way
to track it down.

Now in theory we might put this critical section into some special
section and restart it by updating regs->eip before returning from the
NMI. But that still leaves the unfixable window introduced by the CPU
not executing the TSC read and the do_single_shot_in() atomically.
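The restart trick itself would be simple enough; something along these
lines, where the section markers and the hook into the NMI path are made
up for the sketch, nothing like this exists today:

	/*
	 * Sketch only: if the NMI interrupted the tsc-read/rearm section,
	 * rewind eip so the tsc gets re-read and the delay recomputed once
	 * we return from the NMI.  The section bounds are hypothetical.
	 */
	extern char __oneshot_fixup_start[], __oneshot_fixup_end[];

	static void nmi_restart_oneshot(struct pt_regs *regs)
	{
		if (regs->eip >= (unsigned long) __oneshot_fixup_start &&
		    regs->eip <  (unsigned long) __oneshot_fixup_end)
			regs->eip = (unsigned long) __oneshot_fixup_start;
	}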

Given my recent experience with the lost-tick compensation code, which
has exactly the same window, I'm not optimistic it's going to keep the
system time accurate. Perhaps the apic is a lot faster and the error
won't propagate visibly. I'm not against trying, but keeping the system
time accurate with a one-shot timer is a lot more complicated than this
pseudocode, and it's not at all obvious that your pseudocode will work.

> do_single_shot_in(delay);
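Spelled out a bit, I read the proposal as something like the below;
get_tsc() here stands for rdtscll() and do_single_shot_in() for whatever
would end up programming the apic one-shot, neither exists as such:

	/*
	 * Sketch of the proposed tick emulation on top of a one-shot timer.
	 * The schedule is anchored to absolute tsc time, so in theory only
	 * the latency of each shot matters and no error accumulates.
	 */
	static unsigned long long should_fire_at;
	static unsigned long long cycles_per_tick;	/* tsc cycles per 1/HZ */

	void handle_single_shot(void)
	{
		long long delay;

		do {
			should_fire_at += cycles_per_tick;
			delay = should_fire_at - get_tsc();
			/* delay <= 0 means we are already late: account the
			 * missed tick and try the next slot */
		} while (delay <= 0);

		do_single_shot_in(delay);	/* <- the unfixable window is here */
	}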

The only other way would be to use the 64bit TSC as the only source for
the system time (perhaps that's what you mean with the above
pseudocode?). But the calibration code would need changes to allow that.
Today the calibration divisor is only applied over a small range, so the
calibration can be quick, and that also makes it easier to cope with the
CPU changing frequency.
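For illustration, deriving the time directly from the tsc is the usual
scaled-integer conversion; the scale factor and shift below are made up,
not what the calibration code computes today:

	/*
	 * Sketch: cycles -> nanoseconds with a precomputed scale factor,
	 * scale = (10^6 << CYC2NS_SHIFT) / tsc_khz.  With only a few bits of
	 * fractional precision the calibration error grows linearly with
	 * uptime instead of being bounded by 1/HZ as it is today, and a real
	 * version would also have to worry about the 64bit multiply
	 * overflowing over long uptimes.
	 */
	#define CYC2NS_SHIFT	10
	static unsigned long cyc2ns_scale;

	static inline unsigned long long cycles_to_ns(unsigned long long cyc)
	{
		return (cyc * cyc2ns_scale) >> CYC2NS_SHIFT;
	}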

Even before thinking about using the one-shot timer, I would like to
fix the lost tick compensation of the current production 2.6.9; only
then can we talk about going tickless with a one-shot timer. If we can't
do the lost-tick compensation without screwing up the system time, I
don't see how we can do the one-shot timer without screwing it up.

The lost tick compensation could also be avoided if we used the TSC as
the source for gettimeofday, ignored the PIT for timekeeping, and used
it only to wake up the CPU once in a while. *Then* we could trivially
convert the PIT to a one-shot timer too, but as said above the accuracy
of the divisor isn't enough and I've no idea if we can get a real
calibration that lasts more than a few hours.
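The hardware side of the one-shot PIT is the trivial part, for what it's
worth; something like this (mode 0, interrupt on terminal count; locking
and the 16bit count limit ignored):

	/*
	 * Sketch: arm PIT channel 0 for a single shot of 'count' PIT ticks
	 * (the PIT runs at ~1.193MHz).  Mode 0 fires one interrupt when the
	 * count reaches zero and then stays idle.
	 */
	static void pit_arm_oneshot(unsigned int count)
	{
		outb(0x30, 0x43);		/* channel 0, lo/hi byte, mode 0 */
		outb(count & 0xff, 0x40);	/* count low byte */
		outb(count >> 8, 0x40);		/* count high byte */
	}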

My fast_gettimeoffset_quotient is set to 0x2f0271; that means the least
significant bit of fast_gettimeoffset_quotient only shows up in the
least significant bit of gettimeofday after the TSC has counted 2**32
ticks, i.e. after less than 4 seconds on my computers at >1GHz. But
gettimeoffset is never asked for more than 1/HZ, so that error cannot
propagate to userspace.
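To put numbers on it (this is essentially the do_fast_gettimeoffset()
mull restated in C):

	/*
	 * usec = (delta_tsc * fast_gettimeoffset_quotient) >> 32
	 *
	 * quotient = 0x2f0271 = 3080817, i.e. 2^32/3080817 ~= 1394 tsc ticks
	 * per microsecond, a ~1.39GHz cpu.  The quotient's least significant
	 * bit weighs delta_tsc/2^32 usec, so it reaches a full microsecond
	 * only once delta_tsc hits 2^32 ticks, which takes about 3 seconds at
	 * that clock.  But the delta handed to this conversion is always
	 * below one tick (1/HZ), so the contribution stays far below a
	 * microsecond and never shows up in gettimeofday.
	 */
	static inline unsigned long tsc_delta_to_usec(unsigned long delta_tsc)
	{
		return ((unsigned long long) delta_tsc *
			fast_gettimeoffset_quotient) >> 32;
	}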