Re: time warps, I despair

Ulrich Windl (ulrich.windl@rz.uni-regensburg.de)
Mon, 4 Nov 1996 09:30:35 +0100


On 31 Oct 96 at 14:12, j_maurer@informatik.uni-kl.de wrote:

>
> Hi!
>
> In article <13068C15524@rkdvmks1.ngate.uni-regensburg.de>, you write:
> |> So what's the effect? From time to time the clock offset jumps by
> |> some amount that is always less than or equal to one tick's worth
> |> (i.e. the offset jumps around between -5ms and 5ms when synchronized).
> |> For those who might get an idea when they see the pattern, I put two
> |> files on an FTP server (pcphy4.physik.uni-regensburg.de):
> |>
> |> PPS/sawtooth.ps.gz uses samples every 64 seconds and shows just the
> |> offset
>
> Is there a chance that some part of the kernel blocks interrupts
> with cli() for so long that *two* timer interrupts arrive in the
> meantime, of which only one is saved and the other one discarded?
>
> I am thinking about virtual console switching... I thought I heard
> that it disables interrupts for a rather long time.
>
> Someone once measured the duration between cli() and sti().
> I think this should be quite easy: on the first cli() of a
> (probably nested) sequence, save the Pentium cycle counter, and
> compute the difference on the matching sti(). Perhaps extraordinarily
> long times with interrupts disabled will correlate with your offset data?

Maybe Linus can change the cli/sti macros for 2.1 to produce a syslog
message if interrupts stay disabled for a significant time...
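
Something along these lines might do it (just an untested sketch; the
macro names, the nesting counter and the 200000-cycle threshold are
made up here, they are not existing kernel code). The idea is to read
the Pentium cycle counter on the outermost cli() and to complain via
printk() on the matching sti() if the interrupts-off interval was long:

#include <linux/kernel.h>       /* printk() */

static unsigned long long cli_start_cycles;
static int cli_depth;

/* Read the Pentium time stamp counter (rdtsc). */
static inline unsigned long long read_tsc(void)
{
        unsigned long long t;
        __asm__ __volatile__("rdtsc" : "=A" (t));
        return t;
}

#define instrumented_cli()                                      \
        do {                                                    \
                __asm__ __volatile__("cli");                    \
                if (cli_depth++ == 0)                           \
                        cli_start_cycles = read_tsc();          \
        } while (0)

#define instrumented_sti()                                      \
        do {                                                    \
                if (--cli_depth == 0) {                         \
                        unsigned long long d =                  \
                                read_tsc() - cli_start_cycles;  \
                        /* 200000 cycles ~ 2ms on a 100MHz CPU */ \
                        if (d > 200000)                         \
                                printk("interrupts were off for %lu cycles\n", \
                                       (unsigned long) d);      \
                }                                               \
                __asm__ __volatile__("sti");                    \
        } while (0)

One would of course still have to make the real cli()/sti() macros in
the kernel headers expand to these to catch all users.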

>
> Second thought: There's a program to change the interrupt
> priorities on the PC's interrupt controller. This was done to
> give better throughput/less character loss with busy serial lines.
> Probably this will help, or at least tell you what the
> current timer priority is?

I'm using the stock kernel, and thus timer interrupts should be
handled first.
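
For completeness, the fragment below is roughly how one could play with
those priorities by hand (a hedged, untested sketch only: it assumes
the standard 8259A OCW2 "set priority" command on the master
controller's command port 0x20 and needs root for ioperm(); it is not
the program Jens mentioned). Making IRQ7 the lowest priority leaves
IRQ0, the timer, the highest, which is the power-up default:

#include <stdio.h>
#include <sys/io.h>

#define PIC1_CMD 0x20                  /* master 8259A command port */

int main(void)
{
        unsigned int lowest = 7;       /* IRQ7 lowest -> IRQ0 (timer) highest */

        if (ioperm(PIC1_CMD, 1, 1) < 0) {
                perror("ioperm");
                return 1;
        }
        /* OCW2 "set priority": 0xC0 | level makes that IRQ level lowest. */
        outb(0xC0 | (lowest & 7), PIC1_CMD);
        printf("lowest priority level set to IRQ%u\n", lowest);
        return 0;
}

As far as I know the 8259A does not let you read the rotation state
back, so this can only set priorities, not report the current one.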

Ulrich

>
> Jens.
>