Where I come from, "real-time" doesn't mean "minimum latency";
it means "guaranteed response time". True real-time scheduling
is harder than just reducing scheduling latency when multiple
real-time tasks exist on the same machine, or when real-time
and non-real-time tasks are mixed. For example, the
guaranteeable minimum latency depends a great deal on the
maximum interrupt processing time. If multiple real-time tasks
exist, one can break the other when events come in for both
and one task takes longer to respond and yield the CPU than
the other task's required response time.
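To make that concrete, here's the classic worst-case
response-time calculation as a back-of-the-envelope sketch
(nothing Linux-specific; every number in it is invented for
illustration). A task's worst-case response time is its own
run time, plus the worst-case blocking/interrupt time, plus
the preemption it suffers from every higher-priority task:

    /* Sketch: fixed-point iteration for worst-case response time
     * R = C + B + sum over higher-prio tasks j of ceil(R/T_j)*C_j.
     * All task parameters are made up for illustration.
     */
    #include <stdio.h>

    struct task { long C; long T; };  /* run time, period (usec) */

    /* higher-priority tasks that can preempt us */
    static struct task hp[] = { { 200, 1000 }, { 500, 5000 } };

    int main(void)
    {
            long C = 300, B = 150;  /* our run time, max blocking */
            long R = C + B, prev = 0;
            int i;

            while (R != prev) {     /* iterate to a fixed point */
                    prev = R;
                    R = C + B;
                    for (i = 0; i < 2; i++)
                            R += ((prev + hp[i].T - 1) / hp[i].T)
                                 * hp[i].C;
            }
            printf("worst-case response: %ld usec\n", R);
            return 0;
    }

If that fixed point comes out above the task's deadline, the
task set simply isn't schedulable, no matter how good the
average-case latency looks.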
Your notion of "RT performance" as being necessarily improved by
reducing scheduler latency is much too simplistic. As long as
your process is scheduled within the necessary amount of time
after an event, and gets to run long enough to perform the
processing needed for that event, it doesn't much matter what
the exact scheduling latency is, as long as it's always less
than that limit.
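For illustration, here's a crude user-space check of that
"always less than the limit" property (a sketch only: the
deadline, priority, and loop count are invented, and a 1 ms
nanosleep() stands in for the application's real event source):

    /* Sketch: does wakeup latency ever exceed the deadline? */
    #include <stdio.h>
    #include <time.h>
    #include <sched.h>
    #include <sys/mman.h>
    #include <sys/time.h>

    #define DEADLINE_US 500L        /* invented bound */
    #define LOOPS       10000

    int main(void)
    {
            struct sched_param sp;
            struct timespec req;
            struct timeval t0, t1;
            long lat, worst = 0;
            int i;

            sp.sched_priority = 50;
            req.tv_sec = 0;
            req.tv_nsec = 1000000;  /* sleep 1 ms */

            sched_setscheduler(0, SCHED_FIFO, &sp); /* needs root */
            mlockall(MCL_CURRENT | MCL_FUTURE);     /* no paging */

            for (i = 0; i < LOOPS; i++) {
                    gettimeofday(&t0, NULL);
                    nanosleep(&req, NULL);          /* "the event" */
                    gettimeofday(&t1, NULL);
                    lat = (t1.tv_sec - t0.tv_sec) * 1000000L
                        + (t1.tv_usec - t0.tv_usec) - 1000L;
                    if (lat > worst)
                            worst = lat;
            }
            printf("worst wakeup latency: %ld usec (%s)\n", worst,
                   worst <= DEADLINE_US ? "within bound" : "MISSED");
            return 0;
    }

Of course a test like this only samples the latency; it can
never prove the bound holds, which is exactly the problem with
"guaranteed".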
If you have a real-time application that requires response
times in microseconds, and the scheduling latencies that you
(but not other people) are seeing matter at that scale, you
probably can't run it on normal hardware, let alone as a user
process in Linux. On a general timesharing system, can you
guarantee that a hardware operation requiring interrupts or
DMA, initiated by a low-priority process, won't stall your
real-time task? At best you'd have to hook the real-time task
up to a high-priority hardware interrupt to achieve that level
of guaranteed response time. Otherwise I can't see how you'd
get that kind of real-time performance in any timesharing
environment, no matter how you hack the OS.
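By "hook the real-time task up to a hardware interrupt" I mean
moving the time-critical work into a kernel interrupt handler.
A minimal sketch against the 2.x request_irq() interface (the
IRQ number and names are made up; error handling omitted):

    #include <linux/sched.h>

    static void rt_handler(int irq, void *dev_id,
                           struct pt_regs *regs)
    {
            /* time-critical response goes here; runs with
             * interrupts disabled because of SA_INTERRUPT,
             * so keep it short */
    }

    int rt_init(void)
    {
            /* IRQ 7 and the device name are invented */
            return request_irq(7, rt_handler, SA_INTERRUPT,
                               "rt-device", NULL);
    }

SA_INTERRUPT gets you in ahead of normal interrupt processing,
but then everyone else's latency is at the mercy of your
handler, which is the same problem in reverse.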