> [...] The problem the stochastic execution time model tries to address
> is the WCET computation mess: WCET computation is hard and often overly
> pessimistic, resulting in under-utilized systems.

I know, and it's very reasonable. The point I'm trying to make is that
resource reservation tries to address the very same issue.
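
Just to put toy numbers on the under-utilization point (Python, with
made-up figures, only for illustration):

# Hypothetical task: execution times cluster around 2 ms, but the
# provable WCET is 10 ms.
wcet_ms = 10.0      # pessimistic analytical bound
mean_ms = 2.0       # observed average-case execution time
period_ms = 20.0

# Provisioning by WCET reserves 50% of a CPU for a task that on
# average needs only 10%; the other 40% sits idle unless some
# reclaiming mechanism hands it back.
print(f"WCET-based utilization:    {wcet_ms / period_ms:.0%}")
print(f"Average-case utilization:  {mean_ms / period_ms:.0%}")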
I am anything but against this model; I just want to be sure it's not
too much in conflict with the other features we have, especially
resource reservation. Especially considering that --if I got the whole
thing about this scheduler right-- resource reservation is something we
really want, and I think the UNC people would agree here, since I heard
Bjorn state this very clearly both in Dresden and in Dublin. :-)
BTW, I'm adding them to the Cc; it seems fair, and more useful than all
this speculation! :-P
Bjorn, Jim, sorry to bother you. If you're interested, this is the very
beginning of the whole thread:
http://lkml.org/lkml/2010/10/29/67
> If you're talking about our most recent "stochastic" paper, it is
> about supporting soft real-time task systems on a multiprocessor where
> resource reservations are used. The main result of the paper is that
> if you provision the reservation for a task slightly higher than its
> average-case execution time, and if you use a scheduling algorithm
> (like global EDF) that ensures bounded tardiness (w.r.t. these
> reservations), then the task's expected tardiness will be bounded, and
> the expected bound does not depend on worst-case execution times. I'm
> not sure if slack-reallocation methods have come up in this discussion
> (sorry, I'm really pressed for time and didn't look), but we didn't
> get into that in our paper.

So, if I understand well (sorry, I am just trying to make a short
summary to check if we are aligned), your analysis is similar to the one
presented in the papers I mentioned earlier in this thread (different
stochastic modelling, but similar approach): you analyse a reservation
in isolation and provide some stochastic tardiness guarantees based on
an (e_i, p_i) service model... Right?
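
For what it's worth, here is a little toy simulation (Python) of what I
understand the claim to be. It is my own fluid approximation of a single
(Q, P) reservation, not the analysis from the paper, and all numbers and
the distribution are made up:

import random

random.seed(42)

P = 10.0       # reservation period (ms)
Q = 3.0        # budget per period (ms), slightly above the mean demand
N = 200_000    # number of periodic jobs to simulate

backlog = 0.0  # unfinished work (ms) pending at each job arrival
total_tardiness = 0.0
max_tardiness = 0.0

for _ in range(N):
    # Execution time: mean 2 ms, but 1% of the jobs take 50 ms,
    # so the "WCET" is far above the ~2.5 ms average.
    e = random.expovariate(1 / 2.0) if random.random() < 0.99 else 50.0
    # Fluid server of rate Q/P: time needed to drain backlog + e.
    finish_delay = (backlog + e) * (P / Q)
    # Deadline is one period after arrival.
    tardiness = max(0.0, finish_delay - P)
    total_tardiness += tardiness
    max_tardiness = max(max_tardiness, tardiness)
    # Lindley recursion: at most Q of the backlog drains per period.
    backlog = max(0.0, backlog + e - Q)

print(f"mean tardiness: {total_tardiness / N:.2f} ms")
print(f"max  tardiness: {max_tardiness:.2f} ms")

With Q above the ~2.5 ms mean, the average tardiness stays small despite
the occasional 50 ms job; push Q below the mean and the backlog (and
with it the average tardiness) grows without bound. That matches the
"provision for the average, not the worst case" intuition, if I read the
result correctly.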