On Fri, 2010-11-12 at 19:07 +0100, Tommaso Cucinotta wrote:
> > > -) the specification of a budget every period may be exploited for
> > > providing deterministic guarantees to applications, if the budget =
> > > WCET, as well as probabilistic guarantees, if the budget < WCET. For
> > > example, what we do in many of our papers is to set the budget to
> > > some percentile/quantile of the observed computation-time
> > > distribution, especially in those cases in which there are isolated
> > > peaks of computation times which would cause an excessive
> > > under-utilization of the system (these are ruled out by the
> > > percentile-based allocation); I think this is a way of reasoning
> > > that can be easily understood and used by developers;
> >
> > Maybe, but I'm clearly not one of them because I'm not getting it.
>
> My fault for not having explained. Let me see if I can clarify. Let's
> just consider the simple case in which application instances do not
> enqueue (i.e., as soon as the application detects it has missed a
> deadline, it discards the current job, as opposed to keeping on
> computing it), and consider a reservation period == application period.
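To make the percentile-based allocation concrete, here is a minimal
user-space sketch (illustrative only; how the resulting budget is then
handed to the scheduler is left as a comment, since that depends on the
reservation interface at hand):

    /*
     * Illustrative sketch only: choose the reservation budget as the p-th
     * percentile of the observed per-job computation times, rather than
     * the WCET, as described above.
     */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_ull(const void *a, const void *b)
    {
        unsigned long long x = *(const unsigned long long *)a;
        unsigned long long y = *(const unsigned long long *)b;
        return (x > y) - (x < y);
    }

    /* p-th percentile (0 < p <= 100) of n observed computation times (ns) */
    static unsigned long long percentile_budget(unsigned long long *c,
                                                size_t n, double p)
    {
        qsort(c, n, sizeof(*c), cmp_ull);
        size_t idx = (size_t)((p / 100.0) * (double)(n - 1));
        return c[idx];
    }

    int main(void)
    {
        /* measured job execution times (ns), with one isolated peak */
        unsigned long long obs[] = { 900000, 950000, 1000000, 1050000,
                                     1100000, 5000000 };
        size_t n = sizeof(obs) / sizeof(obs[0]);

        unsigned long long budget = percentile_budget(obs, n, 95.0);
        printf("budget = %llu ns (95th percentile; the peak is ruled out)\n",
               budget);
        /*
         * 'budget' and the task period would then be passed to whatever
         * reservation-setting interface is in use; budget = max(obs) would
         * instead give the deterministic, WCET-based allocation.
         */
        return 0;
    }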
> > > -) setting a budget equal to (or too close to) the average
> > > computation time is *bad*, because the task is almost in a
> > > meta-stable condition in which its response time may easily grow
> > > uncontrolled;
> >
> > How so? Didn't the paper referenced just prove that the response time
> > stays bounded?
>
> Here I was not referring to G-EDF, but simply to the case in which the
> kernel reserves us a budget every period (whatever the scheduling
> algorithm): as the reserved budget moves from the WCET down towards the
> average computation time, the response-time distribution moves from a
> shape entirely contained below the deadline to a flatter and flatter
> shape, where the probability of the task missing its deadline keeps
> growing. Roughly speaking, if the application instances do not enqueue,
> then with a budget = average computation time I would expect a ~50%
> deadline-miss ratio, something which is hardly acceptable even for soft
> RT applications.
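To put a number on the ~50% figure, here is a throw-away simulation
sketch (my own illustration, assuming a roughly symmetric computation-time
distribution and jobs that are simply discarded when the budget runs out
before they complete):

    /*
     * Throw-away sketch: with budget = average computation time and a
     * roughly symmetric distribution, about half of the jobs do not fit
     * inside their budget, hence miss their deadline if they never enqueue.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define JOBS 100000

    int main(void)
    {
        static double c[JOBS];
        double sum = 0.0;
        int missed = 0;

        srand(1);
        for (int i = 0; i < JOBS; i++) {
            /* sum of 4 uniforms: bell-shaped, symmetric around 2.0 */
            for (int k = 0; k < 4; k++)
                c[i] += (double)rand() / RAND_MAX;
            sum += c[i];
        }

        double budget = sum / JOBS;    /* budget = average computation time */
        for (int i = 0; i < JOBS; i++)
            if (c[i] > budget)         /* job does not complete in its budget */
                missed++;

        printf("budget = avg = %.3f, deadline-miss ratio = %.1f%%\n",
               budget, 100.0 * missed / JOBS);
        return 0;
    }

For a skewed distribution the miss ratio at budget = average would differ,
but the qualitative point above stays the same.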
> > Setting it lower will of course wreak havoc, but that's what we have
> > bandwidth control for (implementing stochastic bandwidth control is a
> > whole separate fun topic though -- although I've been thinking we
> > could do something by lowering the max runtime every time a job
> > overruns the average, and limiting it at 2*avg - max; if you take a
> > simple parametrized reduction function and compute the variability of
> > the resulting series, you can invert that and find the reduction
> > parameter for a given variability).
>
> I'd need some more explanation, sorry; I couldn't understand what you
> are proposing.
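For what it's worth, here is one possible reading of the runtime-reduction
idea above, written out (just my interpretation, with made-up names; not a
worked-out design): every overrun of the average shaves a parametrized
amount off the enforced maximum runtime, with 2*avg - max acting as the
floor.

    /* Sketch of one possible reading of the proposal; names are made up. */
    struct rt_budget {
        double avg;          /* average per-job runtime            */
        double orig_max;     /* initially granted maximum runtime  */
        double max_runtime;  /* currently enforced runtime cap     */
        double alpha;        /* parametrized reduction factor      */
    };

    static void account_job(struct rt_budget *b, double runtime)
    {
        double floor = 2.0 * b->avg - b->orig_max;  /* the "2*avg - max" limit */

        if (runtime <= b->avg)
            return;                 /* no overrun of the average: keep the cap */

        /* simple parametrized reduction applied on every overrun */
        b->max_runtime -= b->alpha * (runtime - b->avg);
        if (b->max_runtime < floor)
            b->max_runtime = floor;
    }

The variability of the resulting max_runtime series then depends on alpha,
which is presumably what "invert that and find the reduction parameter for
a given variability" refers to.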
> > > -) if you want to apply the Mills & Anderson rule for controlling
> > > the bound on the tardiness percentiles, as in that paper (A
> > > Stochastic Framework for Multiprocessor Soft Real-Time Scheduling),
> > > then I can see 2 major drawbacks:
> > >  a) you need to compute the "\psi" in order to use "Corollary 10" of
> > > that paper, but that quantity requires solving an LP optimization
> > > problem (see also the example in Section 6); the \psi can be used in
> > > Eq. (36) in order to compute the *expected tardiness*;
> >
> > Right, but do we ever actually want to compute the bound? G-EDF also
> > incurs tardiness but we don't calculate it either.
>
> I was assuming you were proposing to keep an admission test based on
> providing the parameters needed for checking whether or not a given
> tardiness bound were respected. I must have misunderstood. Would you
> please detail the test (and the result in the paper) you are thinking
> of using?
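As a general note on how an *expected* tardiness relates to tardiness
*percentiles* at all (this is just the generic argument, not necessarily
the derivation used in that paper): tardiness T is non-negative, so
Markov's inequality gives

    P(T >= t) <= E[T] / t

i.e., any bound on the expected tardiness immediately bounds every
tardiness percentile, although rather loosely.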
> > > If you really want, you can disable *any* type of admission control
> > > at the kernel level, and you can disable *any* kind of budget
> > > enforcement, and just trust user space to have deployed the
> > > proper/correct number & type of tasks onto your embedded RT platform.
> >
> > I'm very much against disabling everything and letting the user sort
> > it out; that's basically what SCHED_FIFO does too, and it's a frigging
> > nightmare.
>
> Sure, I agree. I was simply suggesting it as a last-resort option
> (possibly usable by exploiting a compile-time option of the scheduler
> that compiles out the admission test), useful in those cases in which
> you do have a complex user-space admission test of your own (or even an
> off-line static analysis of your system), but the simple admission test
> in the kernel would actually reject the task set, the test being merely
> sufficient.
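For reference, the kind of "merely sufficient" in-kernel test being talked
about is, roughly, a utilization-based check of the following shape
(illustrative only, not the actual scheduler code; the struct is made up
for the example):

    /*
     * Illustrative only: a utilization-based admission test of the kind
     * discussed above. Total reserved utilization must not exceed the
     * number of CPUs, which is sufficient for bounded tardiness under
     * global EDF with implicit deadlines.
     */
    struct rsv_params {
        unsigned long long runtime;  /* reserved budget per period (ns) */
        unsigned long long period;   /* reservation period (ns)         */
    };

    static int admit_task_set(const struct rsv_params *t, int n, int ncpus)
    {
        double u = 0.0;

        for (int i = 0; i < n; i++)
            u += (double)t[i].runtime / (double)t[i].period;

        return u <= (double)ncpus;
    }

Once the in-kernel test has to be more conservative than this (deadlines
shorter than periods, partitioning, and so on), a finer off-line or
user-space analysis may well accept task sets that the simple check
rejects, which is exactly the situation in which one would be tempted to
bypass it.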