Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs unpinned

From: Kamalesh Babulal
Date: Wed Jun 15 2011 - 01:37:42 EST


* Paul Turner <pjt@xxxxxxxxxx> [2011-06-13 17:00:08]:

> Hi Kamalesh,
>
> I tried on both friday and again today to reproduce your results
> without success. Results are attached below. The margin of error is
> the same as the previous (2-level deep case), ~4%. One minor nit: in
> your script's input parsing you're calling shift; you don't need to
> do this with getopts, and it will actually lead to arguments being
> dropped.
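[The getopts point above can be illustrated with a minimal sketch; the option letters and variable names here are hypothetical, not taken from the script under discussion. getopts advances OPTIND itself, so a manual shift inside the loop makes it skip arguments; shift once, after the loop, by OPTIND-1.]

```shell
#!/bin/sh
# Illustrative sketch only: parse options with getopts, no shift in the loop.
cpu_mask=""
pinned=0
while getopts "c:p" opt; do
    case "$opt" in
        c) cpu_mask="$OPTARG" ;;   # hypothetical option: CPU mask
        p) pinned=1 ;;             # hypothetical option: pin tasks
    esac
done
# Consume all parsed options in one step, after the loop.
shift $((OPTIND - 1))
echo "remaining args: $*"
```

[e.g. invoked as `./run.sh -c 0-3 -p extra` it prints `remaining args: extra`.]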
>
> Are you testing on top of a clean -tip? Do you have any custom
> load-balancer or scheduler settings?
>
> Thanks,
>
> - Paul
>
>
> Hyper-threaded topology:
> unpinned:
> Average CPU Idle percentage 38.6333%
> Bandwidth shared with remaining non-Idle 61.3667%
>
> pinned:
> Average CPU Idle percentage 35.2766%
> Bandwidth shared with remaining non-Idle 64.7234%
> (The mask in the "unpinned" case is 0-3,6-9,12-15,18-21 which should
> mirror your 2 socket 8x2 configuration.)
>
> 4-way NUMA topology:
> unpinned:
> Average CPU Idle percentage 5.26667%
> Bandwidth shared with remaining non-Idle 94.73333%
>
> pinned:
> Average CPU Idle percentage 0.242424%
> Bandwidth shared with remaining non-Idle 99.757576%
>
Hi Paul,

I tried tip 919c9baa9 + the V6 patchset on a 2-socket, quad-core
machine with HT, and the idle time seen is ~22% to ~23%. The kernel is
not tuned with any custom load-balancer or scheduler settings.

unpinned:
Average CPU Idle percentage 23.5333%
Bandwidth shared with remaining non-Idle 76.4667%

pinned:
Average CPU Idle percentage 0%
Bandwidth shared with remaining non-Idle 100%
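
[For reference, an average-idle figure like the ones above can be derived from /proc/stat deltas. This is only a minimal sketch, not the harness used for these results; the 5-second sampling window and the use of only the user/nice/system/idle fields are assumptions.]

```shell
#!/bin/sh
# Illustrative sketch: sample the aggregate "cpu" line of /proc/stat
# twice and report the idle share of the elapsed jiffies.
read _ user nice system idle rest < /proc/stat
t0_idle=$idle
t0_total=$((user + nice + system + idle))
sleep 5   # assumed sampling window
read _ user nice system idle rest < /proc/stat
d_idle=$((idle - t0_idle))
d_total=$((user + nice + system + idle - t0_total))
echo "Average CPU Idle percentage $((100 * d_idle / d_total))%"
```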

Thanks,

Kamalesh
>
>
>
> On Fri, Jun 10, 2011 at 11:17 AM, Kamalesh Babulal
> <kamalesh@xxxxxxxxxxxxxxxxxx> wrote:
> > * Paul Turner <pjt@xxxxxxxxxx> [2011-06-08 20:25:00]:
> >
> >> Hi Kamalesh,
> >>
> >> I'm unable to reproduce the results you describe.  One possibility is
> >> load-balancer interaction -- can you describe the topology of the
> >> platform you are running this on?
> >>
> >> On both a straight NUMA topology and a hyper-threaded platform I
> >> observe a ~4% delta between the pinned and un-pinned cases.
> >>
> >> Thanks -- results below,
> >>
> >> - Paul
> >>
> >>
(snip)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/