Re: [PATCH] sched/deadline: Add sched_dl documentation
From: Peter Zijlstra
Date: Mon Jan 20 2014 - 06:47:06 EST
On Mon, Jan 20, 2014 at 12:24:42PM +0100, Henrik Austad wrote:
> > +2. Task scheduling
> > +==================
> > +
> > + The typical -deadline task is composed of a computation phase (instance)
> > + which is activated in a periodic or sporadic fashion. The expected (maximum)
> > + duration of such a computation is called the task's runtime; the time interval
> > + by which each instance needs to be completed is called the task's relative
> > + deadline. The task's absolute deadline is dynamically calculated as the
> > + time instant a task (or, more properly, one of its instances) activates
> > + plus the relative deadline.
> activates - released?
> Since real-time papers from different rt-campus around the academia insist
> on using *slightly* different terminology, perhaps add a short dictionary
> for some of the more common terms?
Oh gawd, they really don't conform with their definitions? I'd not
realized that.
> D: relative deadline, typically N ms after release
You failed to define release :-) It's the 'wakeup' event, right? Whereas
the activation would be the moment we actually schedule the job
(instance).
> d: absolute deadline, the physical time when a given instance of a job
> needs to be completed
> R: relative release time, for periodic tasks, this is typically 'every N ms'
> r: absolute release time
> C: Worst-case execution time
> ...you get the idea.
> Perhaps too academic?
I think not, one can never be too clear about these things.
> > +4. Tasks CPU affinity
> > +=====================
> > +
> > + -deadline tasks cannot have an affinity mask smaller than the entire
> > + root_domain they are created on. However, affinities can be specified
> > + through the cpuset facility (Documentation/cgroups/cpusets.txt).
> Does this mean that sched_deadline is a somewhat global implementation?
Yes, it's a GEDF-like thing.
> rather, at what point in time will sched_deadline take all cpus in a set
> into consideration and when will it only look at the current CPU? Where is
> the line drawn between global and fully partitioned?
It's drawn >< that close to global.
So I think adding a SCHED_FLAG_DL_HARD option, which would reduce to
strict per-cpu affinity and deliver 0 tardiness, is future work.
It's slightly complicated in that you cannot share the DL tree between
the GEDF and EDF jobs, because while a GEDF job might have an earlier
deadline, an EDF job might have a lesser laxity. Not running the EDF job
in that case would result in a deadline miss (although, assuming we'd
still have a functioning GEDF admission control, we'd still have bounded
tardiness).
I'm not entirely sure we want to do anything in between the fully global
and per-cpu 'hard' mode -- random affinity masks seem like a terribly
hard thing to support.
NOTE: the 'global' nature is per root_domain, so cpusets can be used to
carve the thing into smaller balance sets.
> Also, how do you account the budget when a resource holder is boosted in
> order to release a resource? (IIRC, you use BWI, right?)
Boosting is still work in progress, but yes, it does a BWI-like thing.