> 2010/12/20 Tommaso Cucinotta <tommaso.cucinotta@xxxxxxxx>:
>> 1. from a requirements analysis phase, it comes out that it should be
>> possible to specify the individual runtimes for each possible frequency,
>> as it is well known that the way computation times scale with CPU
>> frequency is application-dependent (and platform-dependent); this assumes
>> that, as a developer, I can specify the possible configurations of my
>> real-time app, and then the OS is free to pick the CPU frequency that
>> best suits its power management logic (i.e., keeping the minimum
>> frequency at which I can still meet all the deadlines).
>
> I think this makes perfect sense, and I have explored related ideas,
> but for the Linux kernel and softer real-time use cases I think it is
> likely too much, at least if this info needs to be passed to the kernel.
> But if I was designing a system that needed real hard RT tasks, I would
> probably not enable cpufreq while those tasks were active.

This is what has always been done. However, there's an interesting thread
[...]
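Just to make point 1 above concrete, the per-frequency information I have
in mind could look, very roughly, like the sketch below (C, with invented
names; nothing like this exists in the kernel, it is only meant to fix
ideas):

#include <stdint.h>

#define MAX_FREQ_STEPS 8

/* one (frequency, runtime) pair measured/estimated by the developer */
struct dl_runtime_point {
        uint64_t freq_khz;   /* CPU frequency this runtime was tuned for */
        uint64_t runtime_ns; /* budget needed per period at that frequency */
};

/* per-task parameters: one runtime for each supported frequency */
struct dl_multi_runtime_attr {
        uint64_t period_ns;
        uint64_t deadline_ns;
        unsigned int nr_points; /* number of valid entries in points[] */
        struct dl_runtime_point points[MAX_FREQ_STEPS]; /* sorted by freq */
};

Given something like this, the OS would be free to pick the lowest
frequency whose declared runtimes still keep every admitted task
schedulable.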
>> 4. I would say that, given the tendency to over-provision the runtime (WCET)
>> for hard real-time contexts, it would not be too much of a burden for a
>> hard RT developer to properly over-provision the required budget in
>> presence of a trivial runtime rescaling policy like in 2.; however, in
>> order to make everybody happy, it doesn't seem a bad idea to have
>> something like:
>> 4a) use the fine runtimes specified by the user, if they are available;
>> 4b) use the trivially rescaled runtimes if the user only specified a
>> single runtime; of course, it should be clear through the API which
>> frequency the user's runtime refers to in such a case (e.g., the
>> maximum one?)
>
> You mean this on an application level?

I was referring to the possibility to both specify (from within the app) the
[...]
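In code, the 4a/4b fallback could be as simple as the following sketch
(reusing the invented structure from above; runtime_for_freq is made up
as well):

/* pick the budget to use when running at freq_khz */
static uint64_t runtime_for_freq(const struct dl_multi_runtime_attr *attr,
                                 uint64_t freq_khz, uint64_t max_freq_khz)
{
        unsigned int i;

        /* 4a) a fine runtime specified by the user for this frequency */
        for (i = 0; i < attr->nr_points; i++)
                if (attr->points[i].freq_khz == freq_khz)
                        return attr->points[i].runtime_ns;

        /*
         * 4b) trivial rescaling: the single runtime provided is taken as
         * referring to the maximum frequency; assuming the cycles needed
         * stay constant, the budget scales inversely with the frequency.
         */
        return attr->points[0].runtime_ns * max_freq_khz / freq_khz;
}

E.g., a runtime of 10 ms specified at 1 GHz trivially rescales to 20 ms
when the CPU is brought down to 500 MHz.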
>> 5. Mode Change Protocol: whenever a frequency switch occurs (e.g., dictated
>> by the non-RT workload fluctuations), runtimes cannot simply be rescaled
>> instantaneously: keeping it short, the simplest thing we can do is to rely
>> on the various CBS servers implemented in the scheduler to apply the change
>> from the next "runtime recharge", i.e., the next period. This creates the
>> potential problem that the RT tasks have a non-negligible transitory for
>> the instances crossing the CPU frequency switch, in which they do not have
>> enough runtime for their work. Now, the general "rule of thumb" is
>> straightforward: make room first, then "pack", i.e., we need to consider
>> 2 distinct cases:
>
> If we use the trivial rescaling is this a problem?

This is independent of how the budgets for the various CPU speeds are
computed.
> In my implementation the runtime accounting is correct even when the
> frequency switch happens during a period.
> Also with Peter's suggested implementation the runtime will be correct,
> as I understand it.

Is it too much of a burden for you to detail how this "accounting" is done?
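What I would imagine, purely as a sketch (field and function names are
invented), is that on a switch the *remaining* budget of the current
instance is rescaled in place, rather than waiting for the next recharge:

#include <stdint.h>

struct my_dl_se {
        uint64_t runtime; /* remaining budget of the current instance, ns */
};

static void rescale_remaining_runtime(struct my_dl_se *dl_se,
                                      uint64_t old_freq_khz,
                                      uint64_t new_freq_khz)
{
        /*
         * The cycles still to be granted this period take
         * old_freq/new_freq times as long after the clock change, so
         * only the leftover part of the budget is rescaled, mid-period.
         */
        dl_se->runtime = dl_se->runtime * old_freq_khz / new_freq_khz;
}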
>> 5a) we want to *increase the CPU frequency*: we can immediately increase
>> the frequency, and the RT applications will have a temporary
>> over-provisioning of runtime (still tuned for the slower-frequency case);
>> however, as soon as we're sure the CPU frequency switch has completed,
>> we can lower the runtimes to the new values;
>
> Don't you think that this was due to the fact that you did it from
> user space?

nope. The problem is the one I tried to detail above, and it is there both
with a user-space and with an in-kernel implementation.
> I actually change the scheduler's accounting for the rest of the runtime,
> i.e., it can deal with partial runtimes.

... same request as above, if possible (detail, please) ...
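For completeness, the "make room first, then pack" rule sketched in 5a
(and, presumably, its mirror case for slowing down) would translate into
an ordering like this (all helper names are hypothetical):

#include <stdint.h>

extern void set_cpu_frequency(uint64_t khz);      /* start the switch */
extern void wait_for_frequency_switch(void);      /* switch completed */
extern void update_all_dl_runtimes(uint64_t khz); /* retune the budgets */

static void switch_frequency_safely(uint64_t old_khz, uint64_t new_khz)
{
        if (new_khz > old_khz) {
                /* 5a) speeding up: the old, larger budgets are still safe
                 * at the higher speed, so switch first and shrink the
                 * budgets only once the switch has completed. */
                set_cpu_frequency(new_khz);
                wait_for_frequency_switch();
                update_all_dl_runtimes(new_khz);
        } else {
                /* slowing down: enlarge the budgets first, then lower the
                 * frequency, so that no instance crossing the switch is
                 * caught under-provisioned. */
                update_all_dl_runtimes(new_khz);
                set_cpu_frequency(new_khz);
        }
}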