I understand what happens and why. I admit that I'm not familiar with the
POSIX standard on this issue. Questions:
* I've heard that the kernel's timer resolution increased from 10 ms to
1 ms in 2.6. Why does the itimer have such coarse granularity? I
expected it to be highly accurate in this range.
I think it is. The missing piece, I think, is that you expect the
resolution to be exactly 1/HZ, i.e. exactly 1 ms. It is actually not quite
that, because the PIT (the 8254) cannot generate interrupts at exactly 1 ms
intervals (close, but not close enough for NTP). So the kernel figures out
what the true PIT rate is and sets up the resolution from that, which gives
a tick of ~999,849 nanoseconds instead of 1,000,000 nanoseconds. There is
also some error in converting this to microseconds, but the actual math is
done at higher precision, with the conversion applied afterwards. That is
why the various times the program tries don't come out as exact multiples
of each other, or of anything expressible purely in microseconds.
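To make the arithmetic concrete, here is a back-of-the-envelope version of
where that number comes from, assuming the usual 1,193,182 Hz PIT input
clock and HZ=1000. CLOCK_TICK_RATE and LATCH are the real i386 macros; the
final conversion below is plain floating point rather than the kernel's
fixed-point math, so the last couple of digits differ slightly from the
~999,849 above.

/* Rough sketch of the tick calculation, not the kernel's exact code. */
#include <stdio.h>

#define CLOCK_TICK_RATE 1193182UL                  /* PIT input clock, Hz */
#define HZ              1000UL                     /* requested tick rate */
#define LATCH ((CLOCK_TICK_RATE + HZ / 2) / HZ)    /* divisor loaded into the PIT */

int main(void)
{
    /* One tick is LATCH PIT cycles; convert that to nanoseconds. */
    double tick_ns = (double)LATCH * 1e9 / (double)CLOCK_TICK_RATE;

    printf("PIT divisor (LATCH): %lu\n", LATCH);        /* 1193 */
    printf("tick length: %.1f ns\n", tick_ns);          /* ~999847.7 ns */
    printf("tick rate:   %.3f Hz\n", 1e9 / tick_ns);    /* ~1000.152 Hz */
    return 0;
}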
I expect there are at least a few applications that will misbehave because
the developers did not expect a timer to behave this way (regardless of
whether it's proper according to the spec).
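For what it's worth, here is a minimal test along the lines of what I
imagine such applications do (a sketch I put together, not code from any
real application): it asks ITIMER_REAL for a 1 ms interval and counts
SIGALRM deliveries over ten seconds of wall-clock time. Since the request
is rounded up to whole ticks, and a tick is just under 1 ms, the 1 ms
request becomes two ticks, and the count comes out near 5,000 rather than
the 10,000 a developer might expect.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks;

static void on_alarm(int sig)
{
    (void)sig;
    ticks++;
}

int main(void)
{
    struct sigaction sa;
    struct itimerval it;
    struct timeval start, now;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    /* Request a signal every 1000 microseconds, repeating forever. */
    memset(&it, 0, sizeof(it));
    it.it_interval.tv_usec = 1000;
    it.it_value.tv_usec    = 1000;
    setitimer(ITIMER_REAL, &it, NULL);

    gettimeofday(&start, NULL);
    do {
        pause();                        /* wake up on each SIGALRM */
        gettimeofday(&now, NULL);
    } while (now.tv_sec - start.tv_sec < 10);

    printf("SIGALRM deliveries in ~10 s: %d (naive expectation: 10000)\n",
           (int)ticks);
    return 0;
}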
Is it possible to choose a timer resolution that errs on the high side of
1ms instead of the low side? [*] It seems to me that would result in the
application getting very close to the expected number of alarm signals. I
am not at all familiar with the kernel design so I don't know if this would
be feasible or not.
[*] If this is the 8254 timer, using 1194 as the divisor should result in a
resolution of ~1,000,686 nanoseconds.
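A quick arithmetic check of that footnote, assuming the standard
1,193,182 Hz input clock: the current divisor of 1193 lands just under
1 ms, while 1194 lands just over it.

#include <stdio.h>

int main(void)
{
    const double pit_hz = 1193182.0;              /* 8254 input clock, Hz */
    const unsigned divisor[] = { 1193, 1194 };    /* current vs. proposed */

    for (int i = 0; i < 2; i++) {
        double period_ns = divisor[i] * 1e9 / pit_hz;
        printf("divisor %u -> %.1f ns per tick (%s 1 ms)\n",
               divisor[i], period_ns, period_ns < 1e6 ? "under" : "over");
    }
    return 0;
}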