Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs unpinned
From: Paul Turner
Date: Tue Jun 21 2011 - 15:49:33 EST
Can you see what things look like under v7?
There have been a few improvements to quota re-distribution that should
hopefully help your test case.
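
For anyone reproducing this, the knobs being exercised are the
per-group CFS bandwidth files. A minimal sketch, assuming the cpu
controller is mounted at /cgroup/cpu (mount point, group name, and
values here are examples only):

    mkdir /cgroup/cpu/limited
    echo 100000 > /cgroup/cpu/limited/cpu.cfs_period_us  # 100ms period
    echo 200000 > /cgroup/cpu/limited/cpu.cfs_quota_us   # quota/period = 2 CPUs
    echo $$ > /cgroup/cpu/limited/tasks                  # move this shell in

The group is throttled once it consumes quota/period CPUs of bandwidth
in each period; writing -1 to cpu.cfs_quota_us removes the limit.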
The remaining idle% I see on my machines appears to be a product of
load-balancer interaction.
On Tue, Jun 14, 2011 at 10:37 PM, Kamalesh Babulal
<kamalesh@xxxxxxxxxxxxxxxxxx> wrote:
> * Paul Turner <pjt@xxxxxxxxxx> [2011-06-13 17:00:08]:
>> Hi Kamalesh,
>> I tried on both friday and again today to reproduce your results
>> without success. Results are attached below. The margin of error is
>> the same as the previous (2-level deep case), ~4%. One minor nit: in
>> your script's input parsing you're calling shift; you don't need to do
>> this with getopts, and it will actually lead to arguments being skipped.
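>>
>> With getopts, the loop itself advances past each option via OPTIND;
>> a minimal sketch of the pattern (the option letters are examples,
>> not your script's actual flags):
>>
>>     while getopts "p:d:" opt; do
>>         case $opt in
>>             p) period=$OPTARG ;;
>>             d) duration=$OPTARG ;;
>>         esac
>>     done
>>     shift $((OPTIND - 1))   # one shift after the loop, only if
>>                             # positional arguments follow the options
>>
>> Shifting inside the case arms puts OPTIND out of sync with the
>> remaining argument list.
>>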
>> Are you testing on top of a clean -tip? Do you have any custom
>> load-balancer or scheduler settings?
>> - Paul
>> Hyper-threaded topology:
>> Average CPU Idle percentage 38.6333%
>> Bandwidth shared with remaining non-Idle 61.3667%
>> Average CPU Idle percentage 35.2766%
>> Bandwidth shared with remaining non-Idle 64.7234%
>> (The mask in the "unpinned" case is 0-3,6-9,12-15,18-21 which should
>> mirror your 2 socket 8x2 configuration.)
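>>
>> For reference, a mask like that can be applied either through a
>> cpuset group or per task with taskset; the paths below assume a
>> cpuset hierarchy mounted at /cgroup/cpuset and two memory nodes:
>>
>>     mkdir /cgroup/cpuset/test
>>     echo 0-3,6-9,12-15,18-21 > /cgroup/cpuset/test/cpuset.cpus
>>     echo 0-1 > /cgroup/cpuset/test/cpuset.mems  # mems must be set too
>>     echo $$ > /cgroup/cpuset/test/tasks
>>
>> or simply: taskset -c 0-3,6-9,12-15,18-21 <cmd>
>>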
>> 4-way NUMA topology:
>> Average CPU Idle percentage 5.26667%
>> Bandwidth shared with remaining non-Idle 94.73333%
>> Average CPU Idle percentage 0.242424%
>> Bandwidth shared with remaining non-Idle 99.757576%
> Hi Paul,
> I tried tip 919c9baa9 + the V6 patchset on a 2-socket, quad-core
> machine with HT, and the idle time seen is ~22% to ~23%. The kernel is
> not tuned with any custom load-balancer/scheduler settings.
> Average CPU Idle percentage 23.5333%
> Bandwidth shared with remaining non-Idle 76.4667%
> Average CPU Idle percentage 0%
> Bandwidth shared with remaining non-Idle 100%
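>
> The idle figures above are averages over the run. One way to arrive
> at such a number is from /proc/stat deltas; a rough sketch (the 60s
> window is arbitrary; iowait/irq fields dropped for brevity):
>
>     read -r cpu u1 n1 s1 i1 rest < /proc/stat  # user nice system idle
>     sleep 60
>     read -r cpu u2 n2 s2 i2 rest < /proc/stat
>     total=$(( (u2 + n2 + s2 + i2) - (u1 + n1 + s1 + i1) ))
>     idle=$(( i2 - i1 ))
>     echo "scale=4; $idle * 100 / $total" | bc
>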
>> On Fri, Jun 10, 2011 at 11:17 AM, Kamalesh Babulal
>> <kamalesh@xxxxxxxxxxxxxxxxxx> wrote:
>> > * Paul Turner <pjt@xxxxxxxxxx> [2011-06-08 20:25:00]:
>> >> Hi Kamalesh,
>> >> I'm unable to reproduce the results you describe. One possibility is
>> >> load-balancer interaction -- can you describe the topology of the
>> >> platform you are running this on?
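>> >>
>> >> Something like the following would be plenty to go on (sockets,
>> >> cores per socket, threads per core, NUMA layout):
>> >>
>> >>     lscpu | grep -E 'Socket|Core|Thread|NUMA'
>> >>     cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
>> >>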
>> >> On both a straight NUMA topology and a hyper-threaded platform I
>> >> observe a ~4% delta between the pinned and un-pinned cases.
>> >> Thanks -- results below,
>> >> - Paul