While upgrading the joystick driver to work without turning interrupts off
I noticed that I regularly lose large chunks of time.
At first I used my own direct reading of h/w timer 0, then I turned to
using do_gettimeofday(), with identical results.
I ended up writing a small, loadable dummy driver that, when called,
does back-to-back calls to the timer and builds a histogram of the delays
between consecutive calls. This array is then returned as the result of the 'read'.
The result was that the average response was very fast, about 5 usecs;
do_gettimeofday() was a tad slower at about 10 usecs per call. However, there
was also an occasional 3.6ms (milliseconds!) gap. This is on a P5/90.
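For anyone who wants to see the effect without loading a module, here is a
minimal user-space sketch of the same idea; it only uses gettimeofday(), and
the sample count and bucket size are arbitrary (the real test runs inside the
kernel with do_gettimeofday()):

/* Back-to-back timer reads; histogram of the deltas between calls. */
#include <stdio.h>
#include <sys/time.h>

#define SAMPLES   1000000
#define BUCKETS   512
#define BUCKET_US 10          /* 10 usec per bucket, so 3.6ms shows up near bucket 360 */

int main(void)
{
    static unsigned long hist[BUCKETS];
    struct timeval prev, now;
    long delta, i;

    gettimeofday(&prev, NULL);
    for (i = 0; i < SAMPLES; i++) {
        gettimeofday(&now, NULL);
        delta = (now.tv_sec - prev.tv_sec) * 1000000L
              + (now.tv_usec - prev.tv_usec);
        prev = now;

        if (delta < 0)
            delta = 0;                     /* ignore any backward clock step */
        if (delta / BUCKET_US >= BUCKETS)
            hist[BUCKETS - 1]++;           /* lump anything over ~5ms into the last bucket */
        else
            hist[delta / BUCKET_US]++;
    }

    for (i = 0; i < BUCKETS; i++)
        if (hist[i])
            printf("%5ld-%5ld usec: %lu\n",
                   i * BUCKET_US, (i + 1) * BUCKET_US - 1, hist[i]);
    return 0;
}

Switching virtual consoles while this runs should make the large-delta
buckets stand out immediately.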
This turned out to be the console switching. Each time I switch a virtual
console somewhere I would lose 3.6ms. In other words, even though I am
inside my dummy device driver, doing a 'read', some interrupt takes
away control for 3.6ms. Doing this timing with interrupts off gives
very uniform and stable readings, of course.
Now, this is a major problem since I need better response from the joystick
driver (essentially a real-time device).
Is there a way to disable this thing?
Should the console be allowed to do this in the first place? I did not
follow it into the console source, but is the console actually doing this
work inside the interrupt handler for the keyboard request?
I can package the module+program if anyone is interested in observing
these effects; I find it educational to see how responsive the kernel really
is.
-- Regards Eyal Lebedinsky (eyal@ise.canberra.edu.au, eyal@pcug.org.au)