Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs unpinned

From: Peter Zijlstra
Date: Tue Sep 13 2011 - 12:36:47 EST


On Tue, 2011-09-13 at 21:51 +0530, Srivatsa Vaddagiri wrote:
> > I can't read it seems.. I thought you were talking about increasing the
> > period,
>
> Mm.. I brought up the increased lock contention with reference to this
> experimental result that I posted earlier:
>
> > Tuning min_interval and max_interval of various sched_domains to 1
> > and also setting sched_cfs_bandwidth_slice_us to 500 does cut down idle
> > time further to 2.7%

Yeah, that's the not being able to read part..

> The value of sched_cfs_bandwidth_slice_us was reduced from the default of
> 5000us to 500us, which (along with the reduction of min/max interval) helped
> cut idle time down further (3.9% -> 2.7%). I was noting that this may not
> necessarily be optimal; for example, a low 'sched_cfs_bandwidth_slice_us'
> could result in all cpus contending for cfs_b->lock very frequently.

Right.. so this seems to suggest you're migrating a lot.
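To make the contention point concrete, here is a minimal userspace model
(not the kernel code; the names and numbers are made up for illustration)
of per-cpu runtime pools refilling from one global pool under a single
lock, the way a cfs_rq's local quota is refilled from the global pool
under cfs_b->lock. Shrinking the slice 10x means roughly 10x the lock
acquisitions for the same amount of runtime handed out:

/*
 * Userspace sketch (not kernel code) of why a small
 * sched_cfs_bandwidth_slice_us raises cfs_b->lock contention:
 * every CPU refills its local runtime pool from the global
 * pool under one lock, so a smaller slice means proportionally
 * more lock acquisitions for the same runtime consumed.
 */
#include <pthread.h>
#include <stdio.h>

#define NCPUS		4
#define QUOTA_US	1000000	/* global runtime per period */
#define SLICE_US	500	/* cf. sched_cfs_bandwidth_slice_us */

static pthread_mutex_t cfs_b_lock = PTHREAD_MUTEX_INITIALIZER;
static long global_runtime = QUOTA_US;
static long lock_acquisitions;

/* Model of the "grab one slice from the global pool" step. */
static long assign_runtime(void)
{
	long granted = 0;

	pthread_mutex_lock(&cfs_b_lock);
	lock_acquisitions++;
	if (global_runtime > 0) {
		granted = global_runtime < SLICE_US ?
			  global_runtime : SLICE_US;
		global_runtime -= granted;
	}
	pthread_mutex_unlock(&cfs_b_lock);
	return granted;
}

static void *cpu_thread(void *arg)
{
	long local = 0;

	(void)arg;
	/* Consume runtime; refill from the global pool when empty. */
	for (;;) {
		if (local == 0) {
			local = assign_runtime();
			if (local == 0)
				break;	/* quota exhausted */
		}
		local--;		/* "run" for 1us */
	}
	return NULL;
}

int main(void)
{
	pthread_t cpus[NCPUS];
	int i;

	for (i = 0; i < NCPUS; i++)
		pthread_create(&cpus[i], NULL, cpu_thread, NULL);
	for (i = 0; i < NCPUS; i++)
		pthread_join(cpus[i], NULL);

	printf("slice=%dus -> %ld acquisitions of cfs_b->lock\n",
	       SLICE_US, lock_acquisitions);
	return 0;
}

Flipping SLICE_US between 5000 and 500 shows the acquisition count
scaling inversely with the slice size.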

Also, what workload are we talking about? The insane one with 5 groups of
weight 1024?

Ramping up the frequency of the load-balancer and giving out smaller
slices is really anti-scalability.. I bet a lot of that 'reclaimed' idle
time is spent in system time.
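
For reference, the knobs in question are plain sysctl files (the
per-domain min_interval/max_interval entries are only exposed with
CONFIG_SCHED_DEBUG); below is a sketch of applying the settings from the
earlier experiment, with cpu0/domain0 standing in for the full per-cpu,
per-domain set:

/*
 * Sketch of applying the tunables discussed above (needs root).
 * The cpu0/domain0 path is illustrative; real systems expose one
 * directory per cpu and per domain level, all of which would be
 * written in practice.
 */
#include <stdio.h>

static int write_tunable(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%s\n", val);
	fclose(f);
	return 0;
}

int main(void)
{
	/* Hand out 500us (default 5000us) per refill of a cfs_rq's
	 * local pool, trading cfs_b->lock traffic for tighter quota
	 * distribution. */
	write_tunable("/proc/sys/kernel/sched_cfs_bandwidth_slice_us",
		      "500");

	/* Run the load balancer as often as possible on one example
	 * domain (repeat for every cpu/domain directory). */
	write_tunable("/proc/sys/kernel/sched_domain/cpu0/domain0/min_interval",
		      "1");
	write_tunable("/proc/sys/kernel/sched_domain/cpu0/domain0/max_interval",
		      "1");
	return 0;
}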