On Monday 18 August 2008 21:51, Peter Zijlstra wrote:
> On Mon, 2008-08-18 at 21:24 +1000, Nick Piggin wrote:
> > Really, you think the enterprise distros will willingly break POSIX
> > and their own backwards compatibility by default? I wouldn't have
> > thought so, but anyway I guess they are free to make that choice, so
> > where's the problem?
>
> I'm not seeing why you're making such a big fuss over this - IMO it's not
> such a significant breakage. Esp. since very few realtime apps will
> require such large amounts of time to ever run into the throttle.
>
> If their usage is 95%+ CPU they must have magic WCET estimates - or, like
> in this case, be a benchmark app which IMHO just abuses the real-time
> class.
Note that this certainly does not have to be the case. It is perfectly
valid to dynamically scale the work performed according to the amount
of CPU time available while still being sensitive to latency.
Video decoding would be a really simple example. But you can't just
"know" how all RT apps are coded and assume this is no problem.
> It's like running your real-time code on a 5% slower cpu - if it runs
> correctly on the 5% slower cpu, it will run correctly here too.
Aside from the latency issue, which already makes this statement incorrect...
If the code does not run correctly on a 5% slower CPU, it will break.
How is that OK?

You might expect many systems to include at least a 5% margin of
error, but if the kernel takes 5%, then that's 5% of the safety
margin gone; so while the app might "work", it might no longer
meet its requirements.
> Note that correctness from a RT pov is making your deadline.
Correctness from the kernel's POV is implementing APIs as advertised,
and just as importantly, not changing them. We can argue about how RT
apps work, but there is no argument that the kernel has broken
backwards compatibility and standards.