Another way of looking at the distinction: missing a deadline in a Hard RT
system is an 'unrecoverable' situation. (Whether 'unrecoverable' means
'destruction of the earth' or 'no future deadlines can be met because of
cascading effects' is a "scale" issue, i.e. the 'importance' of the system
within its operational context.) Missing a deadline in a Firm RT system
is a 'recoverable failure' (and has the beneficial property that
uncompleted work can be discarded). Non-mission-critical data collection
and control systems like the Win-printer example have this property.
(If, however, you wish to nit-pick: if the system's goal is "to always
correctly drive Win-printers and not waste paper and ink by screwing up a
few pages", then the Win-printer example IS an example of a hard RT task,
i.e., we have flatly failed in our goal as soon as a deadline is missed.
But most of us aren't that anal about paper and toner :-) ...)
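(To make the distinction concrete: the three classes differ in what a
result completed at time t is _worth_ relative to its deadline d. A toy
sketch, purely illustrative; the numbers are made up, only the shapes
matter:)

/* Illustrative only: the "value" of a result completed at time t
 * with deadline d.  The shapes, not the numbers, are the point. */
#include <float.h>

double hard_value(double t, double d)
{
        /* Hard RT: a late result is a catastrophe; there is no
         * "partial credit", the system has simply failed. */
        return (t <= d) ? 1.0 : -DBL_MAX;
}

double firm_value(double t, double d)
{
        /* Firm RT: a late result is worthless but harmless;
         * discard the work and move on to fresh data. */
        return (t <= d) ? 1.0 : 0.0;
}

double soft_value(double t, double d)
{
        /* Soft RT: a late result still has (decaying) value,
         * so the work must eventually be finished anyway. */
        return (t <= d) ? 1.0 : 1.0 / (1.0 + (t - d));
}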
> As far as the kernel goes, that distinction is totally useless.
> (gee, it is OK to send signal 9 to a failed hard real-time task)
It's never OK to send a signal 9 to a failed Hard real-time task, since
it's never OK to _HAVE_ a failed Hard real-time task in a truly Hard RT
system. In a Hard RT system, 100% of the admitted tasks must meet 100% of
their deadlines. That is invariant. All other metrics are secondary.
This is the DEFINITION of a Hard RT task in a Hard RT system. And (in
case anyone has any doubts) mainstream Linux is NOT a Hard real-time
system. Sending a signal 9 to a failed Firm real-time task, however, is
perfectly reasonable ("scrap your current work and start over on new
data"), and sending it to a failed Soft real-time task is generally a bad
idea (since the work must be completed eventually anyway).
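(An aside on "admitted", since it's doing real work in that sentence: a
true Hard RT system only admits a task set it can prove schedulable up
front. One classic sufficient test for periodic, fixed-priority tasks is
the Liu & Layland rate-monotonic utilization bound; a minimal sketch,
with an invented task struct:)

/* Rate-monotonic admission sketch (Liu & Layland, 1973): n periodic
 * tasks with worst-case execution time C and period (== deadline) T
 * are schedulable under fixed priorities if the total utilization
 * is <= n * (2^(1/n) - 1).  Sufficient but not necessary; a real
 * Hard RT system would do an exact schedulability analysis. */
#include <math.h>

struct rt_task {
        double wcet;    /* worst-case execution time (C) */
        double period;  /* period == deadline (T) */
};

int admit(const struct rt_task *task, int n)
{
        double u = 0.0;
        int i;

        for (i = 0; i < n; i++)
                u += task[i].wcet / task[i].period;

        return u <= n * (pow(2.0, 1.0 / n) - 1.0);
}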
> > These definitions imply NOTHING about the SCALE of the values,
> > which is the problem you address by your preference; they only
> > describe the SHAPE of the value function in time.
>
> Fine, but then don't ask Mr. Gooch if he has hard or soft real-time
> requirements. Instead ask him how much money (sleep, lost face...)
> he would lose if Linux ever failed and compare that to how much money
> (hacking time, another OS) he would pay to avoid the problem. This not
> being a perfect world, there is no guarantee. Reliability has a cost,
> often including extra hardware to allow for stuff that breaks.
>
> The scale matters. The scale determines what level of reliability you
> are willing to pay for. The reliability you demand is what matters
> as far as the kernel goes. Being "soft" only makes it more difficult
> to evaluate the cost tradeoffs.
I never said the scale doesn't matter. But Hard/Soft/Firm represent
fundamentally different scheduling models, in which different things are
held invariant and different things can be optimized. It sounds like
Richard's data-collection process _should_ be a Hard RT process, but
since Linux can't guarantee that, he would like an _approximation_
thereof by minimizing the average interrupt-to-RT-process-wakeup latency.
(Assuming the interrupt is periodic, reducing the mean latency of
commencing an unpreemptible task increases the probability that the task
completes before its deadline, which is presumably the next interrupt.)
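To illustrate what that approximation looks like (this is my sketch, not
Richard's code; the period, the priority, and the POSIX timer calls are
all assumptions made for the sake of a compilable example):

/* A periodic "firm-ish" collection loop: lock into SCHED_FIFO, wake
 * up every PERIOD_NS, and record how late each wakeup actually was.
 * Nothing here guarantees a deadline; it just shrinks the mean (and
 * hopefully the tail) of the wakeup lateness, which is all that
 * mainstream Linux can promise. */
#include <sched.h>
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 10000000L     /* 10 ms; arbitrary */

static void advance(struct timespec *t, long ns)
{
        t->tv_nsec += ns;
        while (t->tv_nsec >= 1000000000L) {
                t->tv_nsec -= 1000000000L;
                t->tv_sec++;
        }
}

int main(void)
{
        struct sched_param sp = { .sched_priority = 50 };
        struct timespec next, now;
        int i;

        /* Best-effort approximation of Hard RT: a high fixed
         * priority.  Needs root; error checking omitted. */
        sched_setscheduler(0, SCHED_FIFO, &sp);

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (i = 0; i < 100; i++) {
                advance(&next, PERIOD_NS);
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                                &next, NULL);
                clock_gettime(CLOCK_MONOTONIC, &now);
                printf("lateness: %ld ns\n",
                       (long)(now.tv_sec - next.tv_sec) * 1000000000L +
                       (now.tv_nsec - next.tv_nsec));
                /* ... do the (short) collection work here ... */
        }
        return 0;
}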
Adam
--
You crucify all honesty                \\ Adam D. Bradley  artdodge@cs.bu.edu
No signs you see do you believe        \\ Boston University Computer Science
And all your words just twist and turn \\ Grad Student and Linux Hacker
Reviving just to crash and burn        \\               <><
---------> Why can't you listen as love screams everywhere? <--------