Re: cgroup, balance RT bandwidth
From: Rolando Martins
Date: Tue Mar 10 2009 - 11:04:13 EST
On Tue, Mar 10, 2009 at 2:26 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Tue, 2009-03-10 at 11:49 +0000, Rolando Martins wrote:
>> Just to confirm, cpuset.sched_load_balance doesn't work with RT, right?
>
> It should. It should split the RT balance domain just the same.
>
>> You cannot have tasks for sub-domain 2 to utilize bandwidth of
>> sub-domain 3, right?
>
> If you disabled load-balancing on your root domain (1 below) then
> indeed, tasks from 2 will not be able to consume bandwidth from tasks in
> 3.
>
> The available bandwidth is related to the number of cpus in the balance
> domain.
cgroup:
  echo 1 > cpuset.sched_load_balance
cgroup/2:
  echo 0 > cpuset.mems
  echo 0-2 > cpuset.cpus
  echo 450000 > cpu.rt_runtime_us
cgroup/3:
  echo 0 > cpuset.mems
  echo 3 > cpuset.cpus
  echo 450000 > cpu.rt_runtime_us
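(Tasks end up in a group by writing their pid to its tasks file; roughly,
I move the launching shell into the group first, e.g. for cgroup/3:

  echo $$ > cgroup/3/tasks

so anything forked from that shell lands in cgroup/3.)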
I have a small test program that busy-loops to use 100% of a cpu and
runs as SCHED_FIFO. When I run two instances of it in cgroup/3, they
only consume bandwidth on cpu 3 (100%); the balancing isn't happening.
Since they are SCHED_FIFO, the two processes end up running
sequentially instead of in parallel.
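For reference, the test is essentially just a SCHED_FIFO busy loop; an
equivalent sketch using chrt instead of my actual binary would be:

  chrt -f 1 sh -c 'while :; do :; done' &
  chrt -f 1 sh -c 'while :; do :; done' &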
Can you check this? Maybe I am doing something wrong...
>
>>
>>       __1__
>>      /     \
>>    2         3
>> (50% rt)  (50% rt)
>>
>> For my application domain it would be interesting to have
>> rt_runtime_us act as a minimum of the allocated RT bandwidth rather than a maximum.
>
>> E.g. if an application in domain 2 needs to go up to 100% and domain 3
>> is idle, it would be nice to let it use the full bandwidth.
>
>> (We could also have a hard upper limit in each sub-domain, e.g.
>> hard_up=0.8, so that even if 100% were available we would only use
>> 80%.) In other words, RT would get the same cpu bandwidth management
>> behaviour as the "best-effort" tasks.
>>
>> Could this be done?
>
> Possibly, but since RT scheduling is all about determinism, I see no use
> in adding something best-effort -- that simply defeats the purpose.
>
>