Re: [PATCH] i386: Selectable Frequency of the Timer Interrupt

From: Bill Davidsen
Date: Wed Jul 13 2005 - 11:25:36 EST


Vojtech Pavlik wrote:
On Tue, Jul 12, 2005 at 08:57:06AM -0700, George Anzinger wrote:


The PIT crystal runs at 14.3181818 MHz (CGA dotclock, found on ISA, ...)
and is divided by 12 to get the PIT tick rate:

14.3181818 MHz / 12 = 1193182 Hz

Yes, but the current code uses 1193180. Wonder why that is...


Because it was stated that way in some ancient DOS docs? I really don't
know. x86-64 uses 1193182, because when I was doing the x86-64 timing
code I had observably better results with that number.
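
For what it's worth, the division itself is easy to check; the little
program below is just the arithmetic from the quoted text, not anything
out of a kernel tree:

#include <stdio.h>

int main(void)
{
        double crystal = 14318181.8;    /* 14.3181818 MHz CGA dotclock */

        /* prints 1193181.817 Hz, which rounds to 1193182, not 1193180 */
        printf("%.3f Hz\n", crystal / 12.0);
        return 0;
}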


  HZ  ticks/jiffie    1 second     error (ppm)
---------------------------------------------------
 100     11932      1.000015238       15.2
 200      5966      1.000015238       15.2
 250      4773      1.000057143       57.1
 300      3977      0.999931429      -68.6
 333      3583      0.999964114      -35.9
 500      2386      0.999847619     -152.4
1000      1193      0.999847619     -152.4

If we are following the standard and trying to set up a timer, the 1
second time MUST be >= 1 second. Thus the values for 300 and above in
this table don't fly.
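
For anyone who wants to reproduce that first table, here is a quick
sketch (plain userspace C, not kernel code; it assumes the exact crystal
value quoted above and rounds ticks/jiffie to the nearest integer, which
is exactly why the 300-and-up rows come out short):

#include <stdio.h>
#include <math.h>

int main(void)
{
        const double pit_rate = 14318181.8 / 12.0;   /* ~1193181.82 Hz */
        const int hz[] = { 100, 200, 250, 300, 333, 500, 1000 };

        printf("  HZ  ticks/jiffie      1 second   error (ppm)\n");
        for (unsigned i = 0; i < sizeof hz / sizeof hz[0]; i++) {
                long ticks = lround(pit_rate / hz[i]);    /* nearest integer */
                double second = ticks * hz[i] / pit_rate; /* HZ jiffies      */
                printf("%5d  %10ld  %14.9f  %10.1f\n",
                       hz[i], ticks, second, (second - 1.0) * 1e6);
        }
        return 0;
}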


In that case, the table will look like this (with the CGA clock fixed to
14.31818000 MHz):

  HZ  ticks/jiffie    1 second     error (ppm)
---------------------------------------------------
 100     11932      1.000015365       15.4
 200      5966      1.000015365       15.4
 250      4773      1.000057270       57.3
 300      3978      1.000182984      183.0
 500      2387      1.000266794      266.8
1000      1194      1.000685841      685.8
---------------------------------------------------
  82     14551      1.000000279        0.3
 216      5524      1.000001956        2.0
 432      2762      1.000001956        2.0
 864      1381      1.000001956        2.0
1381       864      1.000001956        2.0
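
This second table follows from the same sketch with two changes: the
crystal pinned to 14.31818 MHz and ticks/jiffie rounded *up*, so a
"1 second" interval can never come out short. The HZ list is simply the
values from the table; again, this is userspace C, not kernel code:

#include <stdio.h>
#include <math.h>

int main(void)
{
        const double pit_rate = 14318180.0 / 12.0;   /* ~1193181.67 Hz */
        const int hz[] = { 100, 200, 250, 300, 500, 1000,  /* top half    */
                           82, 216, 432, 864, 1381 };      /* bottom half */

        printf("  HZ  ticks/jiffie      1 second   error (ppm)\n");
        for (unsigned i = 0; i < sizeof hz / sizeof hz[0]; i++) {
                long ticks = (long)ceil(pit_rate / hz[i]); /* never short */
                double second = ticks * hz[i] / pit_rate;
                printf("%5d  %10ld  %14.9f  %10.1f\n",
                       hz[i], ticks, second, (second - 1.0) * 1e6);
        }
        return 0;
}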



If we are trying to keep system time, well, we do just fine at that by
using the actual value of the jiffie (NOT 1/HZ) when we update time
(one of the reasons for going to nanoseconds in xtime).


Yes, adding (12.0/14318180)*LATCH to xtime in each timer interrupt,
instead of (1.0/HZ), indeed eliminates the error entirely.
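
A rough illustration of the size of the effect for HZ=1000 (the LATCH
value is the 1194 from the table above; the names here are made up for
the example, not the kernel's):

#include <stdio.h>

int main(void)
{
        const int    hz       = 1000;
        const long   latch    = 1194;               /* PIT ticks per jiffie */
        const double pit_rate = 14318180.0 / 12.0;  /* Hz */

        double real_jiffie_ns    = latch / pit_rate * 1e9; /* what elapses   */
        double nominal_jiffie_ns = 1e9 / hz;               /* what 1/HZ says */

        printf("real jiffie    : %.3f ns\n", real_jiffie_ns);
        printf("nominal jiffie : %.3f ns\n", nominal_jiffie_ns);
        printf("drift          : %.1f s/day if xtime is bumped by 1/HZ\n",
               (real_jiffie_ns - nominal_jiffie_ns) * 1e-9 * hz * 86400);
        return 0;
}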


The drift the user actually observes is best seen by setting up an
itimer to repeat every second. Then you will see the drift, AND it
will be against the system clock, which itself is quite accurate (the
50-100 ppm you mention) even without ntp. And the error really is in
the range of 848 ppm for HZ=1000 BECAUSE we need to follow the
standard. You can easily see this with the current 2.6 kernel. We
even have a bug report on it:

http://bugzilla.kernel.org/show_bug.cgi?id=3289
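
The itimer experiment described above is easy to reproduce from
userspace; something along these lines (plain POSIX, and the 600-expiry
run length is arbitrary) shows the drift against the system clock
directly:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t expiries;

static void on_alarm(int sig)
{
        (void)sig;
        expiries++;
}

int main(void)
{
        struct sigaction sa;
        struct itimerval it;
        struct timeval start, now;

        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_alarm;
        sigaction(SIGALRM, &sa, NULL);

        memset(&it, 0, sizeof it);
        it.it_interval.tv_sec = 1;        /* repeat every "1 second" */
        it.it_value.tv_sec    = 1;
        gettimeofday(&start, NULL);
        setitimer(ITIMER_REAL, &it, NULL);

        while (expiries < 600)            /* ~10 minutes */
                pause();

        gettimeofday(&now, NULL);
        double elapsed = (now.tv_sec - start.tv_sec) +
                         (now.tv_usec - start.tv_usec) / 1e6;
        printf("%d expiries in %.3f s (avg period off by %.0f ppm)\n",
               (int)expiries, elapsed, (elapsed / expiries - 1.0) * 1e6);
        return 0;
}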

Going to HZ=864 would fix this problem. It would likely cause other
problems in places that expect 1/HZ to be a sane number, though.

But if you are going to an "odd" value, wouldn't 1381 be a better
choice, given the current interest in minimizing latency? The overhead
would be a little higher, of course, but the people who want to drop to
250 for battery life would probably want 216 anyway.

How serious is the 1/HZ = sane problem, and more to the point, how many
programs get the HZ value with a system call as opposed to including a
header or building it in? I know some of my older programs use header
files; that was part of planning for the future even before 2.5
started. At the time I didn't expect to have to use the system call.
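
For the record, the two ways of getting the value look roughly like
this; whether the header actually defines HZ at all is system-dependent
(hence the #ifdef), and this is only meant as an illustration:

#include <stdio.h>
#include <unistd.h>
#include <sys/param.h>   /* may define HZ on some systems */

int main(void)
{
        long clk_tck = sysconf(_SC_CLK_TCK);   /* run-time query */

#ifdef HZ
        printf("header HZ      : %d\n", HZ);   /* baked in at compile time */
#else
        printf("header HZ      : not defined\n");
#endif
        printf("sysconf CLK_TCK: %ld\n", clk_tck);
        return 0;
}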

--
-bill davidsen (davidsen@xxxxxxx)
"The secret to procrastination is to put things off until the
last possible moment - but no longer" -me