x86 TSC time warp puzzle

From: Jonathan Lundell
Date: Fri Apr 01 2005 - 20:45:18 EST


Well, not actually a time warp, though it feels like one.

I'm doing some real-time bit-twiddling in a driver, using the TSC to measure out delays on the order of hundreds of nanoseconds. Because I want an upper limit on the delay, I disable interrupts around it.

The logic is something like this (kernel-style sketch; the port/bit names are placeholders, and delay_time is expressed in TSC cycles):

unsigned long flags;
unsigned long long t0, t;

local_irq_save(flags);
outb(set_val, port);                    /* set a bit (placeholder I/O) */
rdtscll(t0);                            /* read TSC; <asm/msr.h> */
do
        rdtscll(t);
while ((t -= t0) < delay_time);         /* busy-wait with IRQs off */
outb(clear_val, port);                  /* clear the bit (placeholder I/O) */
local_irq_restore(flags);

From time to time, when I exit the delay, t is *much* bigger than the requested delay. If the delay is, say, 300 ns, t is usually no more than 325 ns. But every so often, t can be 2000 ns, or 10000 ns, or even much higher.

The value of t seems to depend on the CPU involved. The worst case is with an Intel 915GV chipset, where t approaches 500 microseconds (!).

This is with ACPI and HT disabled, to avoid confounding interactions. I suspected NMIs, of course, but I monitored the NMI counter and mostly saw nothing (an occasional random hit, but usually none at all).
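For what it's worth, the NMI check amounts to something like the sketch below -- not my literal code, and it assumes the 2.6-era i386 nmi_count() accessor (exact header paths may differ); delay_time is in TSC cycles and the "way too long" threshold is arbitrary:

#include <linux/kernel.h>
#include <linux/smp.h>
#include <asm/hardirq.h>        /* nmi_count() on i386, via irq_cpustat */
#include <asm/system.h>         /* local_irq_save/restore (2.6-era i386) */
#include <asm/msr.h>            /* rdtscll() */

/* Sample the per-CPU NMI count around the timed region; when a long
 * delay shows up, report whether any NMIs landed inside it. */
static void timed_region(unsigned long long delay_time)
{
        unsigned long flags;
        unsigned long long t0, t;
        unsigned int nmi0, nmi1;
        int cpu;

        local_irq_save(flags);
        cpu = smp_processor_id();
        nmi0 = nmi_count(cpu);
        rdtscll(t0);
        do
                rdtscll(t);
        while ((t -= t0) < delay_time);
        nmi1 = nmi_count(cpu);
        local_irq_restore(flags);

        if (t > 2 * delay_time)         /* arbitrary "way too long" threshold */
                printk(KERN_INFO "long delay: %llu cycles, NMIs inside it: %u\n",
                       t, nmi1 - nmi0);
}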

The longer delay is real: I can see the bit being set and cleared by the code above on a scope, and when the long delay happens, the bit stays set for a correspondingly long time.

BTW, the symptom is independent of my I/O. I wrote a test case that does nothing but read the TSC, and I get the same result.
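For reference, that TSC-only test amounts to something like the module below (a sketch using 2.6-era i386 headers, not my literal test; the iteration count is arbitrary):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <asm/system.h>         /* local_irq_save/restore (2.6-era i386) */
#include <asm/msr.h>            /* rdtscll() */

/* With interrupts off, read the TSC back-to-back and record the
 * largest gap seen between two consecutive reads. */
static int __init tsc_gap_init(void)
{
        unsigned long flags;
        unsigned long long prev, now, gap, max_gap = 0;
        int i;

        local_irq_save(flags);
        rdtscll(prev);
        for (i = 0; i < 1000000; i++) {
                rdtscll(now);
                gap = now - prev;
                if (gap > max_gap)
                        max_gap = gap;
                prev = now;
        }
        local_irq_restore(flags);

        printk(KERN_INFO "tsc_gap: worst back-to-back gap = %llu cycles\n",
               max_gap);
        return 0;
}

static void __exit tsc_gap_exit(void)
{
}

module_init(tsc_gap_init);
module_exit(tsc_gap_exit);
MODULE_LICENSE("GPL");

The iteration count would need to be large enough for the loop to span at least a couple of the suspected periods (seconds, in the worst case here) to catch the outliers reliably.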

Finally, on some CPUs, at least, the extra delay appears to be periodic. The 500us delay happens about every second. On a different machine (chipset) it happens at about 5 Hz. And the characteristic delay on each type of machine seems consistent.
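One way to pin the period down from inside the driver (a sketch; the function name and threshold handling are placeholders) is to log the spacing between outliers:

#include <linux/kernel.h>
#include <linux/jiffies.h>
#include <linux/param.h>        /* HZ */

static unsigned long last_outlier;      /* jiffies at the previous outlier */

/* Called whenever a measured delay blows past the threshold; prints
 * how long it has been since the previous outlier.  The first report
 * just measures from module load. */
static void note_outlier(unsigned long long t_cycles)
{
        unsigned long now = jiffies;

        printk(KERN_INFO "outlier: %llu cycles, ~%lu ms since the last one\n",
               t_cycles, (now - last_outlier) * 1000 / HZ);
        last_outlier = now;
}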

Any ideas of where to look? Other lists to inquire on?

Thanks.
--
/Jonathan Lundell.