Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs unpinned

From: Srivatsa Vaddagiri
Date: Mon Sep 12 2011 - 06:18:10 EST


* Peter Zijlstra <a.p.zijlstra@xxxxxxxxx> [2011-09-09 14:31:02]:

> > Machine : 16-cpus (2 Quad-core w/ HT enabled)
> > Cgroups : 5 in number (C1-C5), each having {2, 2, 4, 8, 16} tasks respectively.
> > Further, each task is placed in its own (sub-)cgroup with
> > a capped usage of 50% CPU.
>
> So that's loads: {512,512}, {512,512}, {256,256,256,256}, {128,..} and {64,..}

Yes, with the default shares of 1024 for each cgroup.
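For anyone wanting to reproduce the baseline setup, something along these
lines should recreate the hierarchy and the 50% per-task caps (a rough
sketch, not our actual test scripts; the cgroup-v1 mount point, cgroup
names and the DRY_RUN toggle are just illustrative):

#!/usr/bin/env python3
# Sketch: build C1..C5 with per-task sub-cgroups, each capped at 50% of
# one CPU via CFS bandwidth control. Paths/names are assumptions.
import os

BASE = "/sys/fs/cgroup/cpu"           # assumed cgroup-v1 cpu mount point
TASKS = {"C1": 2, "C2": 2, "C3": 4, "C4": 8, "C5": 16}
PERIOD_US = 100000                    # 100ms CFS period
QUOTA_US = 50000                      # 50% of one CPU per task sub-cgroup
DRY_RUN = True                        # set False to actually write (needs root)

def write(path, value):
    if DRY_RUN:
        print(f"echo {value} > {path}")
    else:
        with open(path, "w") as f:
            f.write(str(value))

for cg, ntasks in TASKS.items():
    top = os.path.join(BASE, cg)
    if not DRY_RUN:
        os.makedirs(top, exist_ok=True)
    write(os.path.join(top, "cpu.shares"), 1024)      # default shares
    for i in range(1, ntasks + 1):
        sub = os.path.join(top, f"{cg}_{i}")
        if not DRY_RUN:
            os.makedirs(sub, exist_ok=True)
        write(os.path.join(sub, "cpu.cfs_period_us"), PERIOD_US)
        write(os.path.join(sub, "cpu.cfs_quota_us"), QUOTA_US)   # 50% cap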

FWIW we did also try setting shares for each cgroup proportional to the number
of tasks it has. For example: C1's shares = 2 * 1024 = 2048, C2's = 2 * 1024 =
2048, C3's = 4 * 1024 = 4096, etc., while the /C1/C1_1, /C1/C1_2, ... /C5/C5_16
shares were left at the default of 1024 (as those sub-cgroups each contain only
one task).

That does help reduce idle time by almost 50% (from 15-20% to 6-9%).
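For concreteness, the proportional-shares variant boils down to something like
this (again just a sketch under the same assumed mount point; it only prints
the intended writes):

#!/usr/bin/env python3
# Sketch: top-level shares proportional to task count; per-task sub-cgroups
# (C1/C1_1 ... C5/C5_16) keep the default of 1024.
import os

BASE = "/sys/fs/cgroup/cpu"           # assumed cgroup-v1 cpu mount point
TASKS = {"C1": 2, "C2": 2, "C3": 4, "C4": 8, "C5": 16}

for cg, ntasks in TASKS.items():
    shares = 1024 * ntasks            # e.g. C3 -> 4096, C5 -> 16384
    print(f"echo {shares} > {os.path.join(BASE, cg, 'cpu.shares')}")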

- vatsa
