Looking in kernel/sched.c (v2.0.30), I see that the rate of adjustment
is controlled by the tick and tickadj variables:
long tick = (1000000 + HZ/2) / HZ; /* timer interrupt period */
int tickadj = 500/HZ; /* microsecs */
Here tick is the time elapsed between timer interrupts in microseconds,
and tickadj is the maximum amount of time that can be added to the
clock, on top of tick, at each interrupt. This means the clock can only
be adjusted at 1/2000 of the rate of elapsed time.
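As a quick sanity check of that figure, here is a minimal sketch of the
arithmetic (assuming the usual HZ of 100 on i386):

#include <stdio.h>

#define HZ 100                              /* timer interrupt frequency on i386 */

int main(void)
{
    long tick    = (1000000 + HZ/2) / HZ;   /* 10000 us between interrupts  */
    int  tickadj = 500 / HZ;                /*     5 us extra per interrupt */

    /* At most tickadj extra microseconds are applied for every tick
     * microseconds of real time: 5/10000 = 0.05% = 1 part in 2000. */
    printf("slew rate = %d/%ld = %.2f%% (1 part in %ld)\n",
           tickadj, tick, 100.0 * tickadj / tick, tick / tickadj);
    return 0;
}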
This is not suitable for my purpose, which is to correct the time drift
of a Linux box by fetching the actual time whenever it is connected to
the internet over a dial-up link. Using netdate simply sets the clock
to the correct time, causing a jump in the system time, which I find
unacceptable. With a permanent connection an ntp daemon could be used;
the amount of change needed would then be very small, and provided it
is less than 1 part in 2000 it can be corrected easily.
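For illustration, a minimal sketch of the kind of thing I have in mind,
using adjtime() to slew the clock instead of stepping it (the offset is
passed on the command line here as a stand-in for whatever a time
server on the dial-up link would report; it needs to run as root):

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
    struct timeval delta;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <seconds-to-slew>\n", argv[0]);
        return 1;
    }

    delta.tv_sec  = atol(argv[1]);
    delta.tv_usec = 0;

    /* adjtime() slews the clock gradually, at most tickadj microseconds
     * per timer interrupt, instead of jumping it the way netdate does. */
    if (adjtime(&delta, NULL) < 0) {
        perror("adjtime");
        return 1;
    }
    return 0;
}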
I know from experience that on SunOS 4.x the adjtime system call (used
by the 'date -a' command) adjusts the time at approximately 1% of the
elapsed time. This would be much more suitable for the task, correcting
by approximately one second every couple of minutes.
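Putting the two rates side by side (taking the Linux figures above and
the approximate 1% SunOS rate), here is a rough comparison of how long
each would take to slew out a one second error:

#include <stdio.h>

int main(void)
{
    double offset     = 1.0;         /* seconds of error to correct       */
    double linux_rate = 5.0 / 10000; /* tickadj/tick on Linux, 1 in 2000  */
    double sunos_rate = 0.01;        /* roughly 1% on SunOS 4.x           */

    printf("Linux 2.0.30: %.0f seconds to slew out %.1f s\n",
           offset / linux_rate, offset);
    printf("SunOS 4.x   : %.0f seconds to slew out %.1f s\n",
           offset / sunos_rate, offset);
    return 0;
}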
The questions I have are: why is the allowed rate of adjustment so
small, and what breaks if I change my kernel to make tickadj 10000/HZ
and hence allow a 1% slew rate?
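Concretely, the change I am considering is just that one declaration in
kernel/sched.c:

int tickadj = 10000/HZ;    /* was 500/HZ; allows roughly a 1% slew rate */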
--
Andrew.
----------------------------------------------------------------------
Andrew M. Bishop                              amb@gedanken.demon.co.uk
                                      http://www.gedanken.demon.co.uk/