Re: [patch v4 0/18] sched: simplified fork, release load avg and power awareness scheduling

From: Borislav Petkov
Date: Thu Jan 24 2013 - 04:41:36 EST


On Thu, Jan 24, 2013 at 11:06:42AM +0800, Alex Shi wrote:
> Since the runnable load average needs ~345 ms to accumulate, balancing
> based on it does not cope well with bursts of waking tasks. After
> talking with Mike Galbraith, we agreed to use the runnable average
> only in power-friendly scheduling and to keep the current instantaneous
> load in performance scheduling, for low latency.
>
> So the biggest change in this version is removing the runnable load
> average from regular balancing and using the runnable data only in
> power balancing.
>
> The patchset is based on Linus' tree and consists of 3 parts:
> ** 1, bug fixes and fork/wake balancing cleanup, patches 1~5
> ----------------------
> The first patch removes one domain level. Patches 2~5 simplify
> fork/wake balancing; they improve hackbench performance by 10+% on our
> 4-socket SNB-EP machine.
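
(For reference, the ~345 ms quoted above is how long the per-entity
runnable average takes to ramp from zero to its saturated value. Below is
a minimal user-space sketch of that ramp, not kernel code, assuming the
mainline PELT constants: ~1 ms accounting periods and a decay factor y
chosen so that y^32 = 1/2.)

/* Illustrative sketch: ramp-up of a per-entity runnable average for a
 * task that is fully runnable from time 0.  Assumes 1 ms periods and
 * y^32 == 1/2, as in the mainline per-entity load tracking code.
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
	const double y = pow(0.5, 1.0 / 32);    /* decay per period: y^32 == 1/2 */
	const double max = 1024.0 / (1.0 - y);  /* saturated sum, ~47742 */
	double sum = 0.0;

	for (int ms = 1; ms <= 400; ms++) {
		sum = sum * y + 1024.0;         /* one fully-runnable 1 ms period */
		if (ms == 32 || ms == 100 || ms == 345)
			printf("%3d ms: %5.1f%% of max\n", ms, 100.0 * sum / max);
	}
	return 0;
}

/* Prints roughly: 32 ms -> 50%, 100 ms -> ~88.5%, 345 ms -> ~99.9% of the
 * maximum.  In other words, a freshly woken task looks "light" to the
 * runnable average for a long time, which is why burst wakeups are better
 * served by the instantaneous load.
 */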

Ok, I see some benchmarking results here and there in the commit
messages, but since this touches the scheduler, you probably need to
make sure it doesn't introduce performance regressions vs mainline with
a comprehensive set of benchmarks.

And, AFAICR, mainline defaults to the 'performance' scheme by spreading
tasks out to idle cores, so have you tried comparing vanilla mainline to
your patchset in the 'performance' setting to make sure there are no
problems there? And not only hackbench or a microbenchmark, but aim9 (I
saw that in a commit message somewhere) and whatever other multithreaded
benchmarks you can get your hands on.

Also, you might want to run it on other machines too, not only SNB :-)
And what about ARM? Maybe someone there can run your patchset too.

So, it would be cool to have comprehensive results from all those runs
and see what the numbers say.

Thanks.

--
Regards/Gruss,
Boris.

Sent from a fat crate under my desk. Formatting is fine.