The old GRUB scheduler for Linux was used for some experiments published in
a paper. The basic ideas are (warning! This is an over-simplification of the
algorithm! :)
- You assign a runtime and a period to each SCHED_DEADLINE task as usual
(see the first sketch after this list)
- Each task is guaranteed to receive its runtime every period
- You can also define a maximum fraction Umax of the CPU time that the
SCHED_DEADLINE tasks can use. Note that Umax _must_ be greater than or equal
to sum_i runtime_i / period_i (see the second sketch after this list)
(note: in the original GRUB paper, only one CPU is considered, and Umax is
set equal to 1)
- If the tasks are consuming less than Umax, then the scheduling algorithm
allows them to use more runtime (but never less than the guaranteed
runtime_i), so that they can consume up to Umax.
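As a concrete example of the first two points, this is more or less what
assigning a runtime and a period looks like from user space, through the
sched_setattr() syscall. It is only a minimal sketch: the 10ms/100ms values
are made up for the example, and (since glibc provides no wrapper) the
struct and the syscall are declared by hand, as usual:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

struct sched_attr {			/* as expected by sched_setattr(2) */
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;		/* used by SCHED_OTHER/SCHED_BATCH */
	uint32_t sched_priority;	/* used by SCHED_FIFO/SCHED_RR */
	/* SCHED_DEADLINE parameters, in nanoseconds */
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
};

static int sched_setattr(pid_t pid, struct sched_attr *attr,
			 unsigned int flags)
{
	return syscall(SYS_sched_setattr, pid, attr, flags);
}

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy   = SCHED_DEADLINE;
	attr.sched_runtime  =  10 * 1000 * 1000;	/*  10ms */
	attr.sched_deadline = 100 * 1000 * 1000;	/* 100ms */
	attr.sched_period   = 100 * 1000 * 1000;	/* 100ms */

	/* needs root (or CAP_SYS_NICE) to succeed */
	if (sched_setattr(0, &attr, 0) < 0) {
		perror("sched_setattr");
		return 1;
	}

	/* ... the task's periodic work would go here ... */
	return 0;
}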
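The Umax constraint from the third point is just the usual utilization-based
admission test; something like the following (all the names here are mine,
for illustration only, not actual kernel symbols):

struct dl_params {
	double runtime;		/* seconds */
	double period;		/* seconds */
};

/* Umax must be >= sum_i runtime_i / period_i; otherwise the
 * guarantees cannot be provided and the task set is rejected. */
static int admissible(const struct dl_params *tasks, int n, double umax)
{
	double usum = 0.0;
	int i;

	for (i = 0; i < n; i++)
		usum += tasks[i].runtime / tasks[i].period;

	return usum <= umax;
}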
This is achieved by modifying the rule used to decrease the runtime: in
SCHED_DEADLINE, if a task executes for a time delta, its runtime is
decreased by delta; using GRUB, it is decreased by a smaller amount of time
(computed based on Umax, on the set of active SCHED_DEADLINE tasks, etc.).
This requires implementing some kind of state machine (the details are in
the GRUB paper); a rough sketch follows.
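To give an idea of the mechanism, here is an over-simplified sketch of the
modified accounting rule (again, this is not the exact rule from the GRUB
paper: the "delta * U_act / Umax" scaling is just one simple possibility,
and the three per-task states are the ones described in the paper):

/* Per-task states of the GRUB state machine (see the paper for
 * the transitions between them). */
enum grub_state {
	GRUB_INACTIVE,
	GRUB_ACTIVE_CONTENDING,
	GRUB_ACTIVE_NONCONTENDING,
};

struct grub_task {
	enum grub_state state;
	double utilization;	/* runtime_i / period_i */
	double runtime;		/* remaining runtime, in seconds */
};

/* Total utilization of the currently active tasks; kept up to
 * date by the state machine on task wakeups and blocks. */
static double u_act;
static double u_max;

/* Plain SCHED_DEADLINE charges the full delta; GRUB scales it
 * down when part of the reserved bandwidth is not in use, so the
 * executing task can run longer (consuming up to Umax in total). */
static void grub_account(struct grub_task *t, double delta)
{
	t->runtime -= delta * (u_act / u_max);
}

When all the tasks are active (u_act == u_max), this degenerates to the
standard SCHED_DEADLINE accounting.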
> Has this code been released to some community, or not?

I also had an implementation of the GRUB algorithm (based on a modification
of my old CBS scheduler for Linux), but the computational complexity of the
algorithm was too high. That's why I never proposed merging it into
SCHED_DEADLINE.
But maybe some trade-off can be found between "exact compliance with the
GRUB algorithm" and implementation efficiency that makes it acceptable...