Re: [PATCH] sched: Reduce contention in update_cfs_rq_blocked_load

From: Peter Zijlstra
Date: Fri Aug 08 2014 - 03:11:17 EST


On Fri, Aug 08, 2014 at 06:30:08AM +0800, Yuyang Du wrote:
> > > > ------------------------------------------------------
> > > > workload     |   10-90   | 100-1000  | 1100-2000
> > > >              |   users   |   users   |   users
> > > > ------------------------------------------------------
> > > > alltests     |   -3.37%  |  -10.64%  |   -2.25%
> > > > all_utime    |   +0.33%  |   +3.73%  |   +3.33%
> > > > compute      |   -5.97%  |   +2.34%  |   +3.22%
> > > > custom       |  -31.61%  |  -10.29%  |  +15.23%
> > > > disk         |  +24.64%  |  +28.96%  |  +21.28%
> > > > fserver      |   -1.35%  |   +4.82%  |   +9.35%
> > > > high_systime |   -6.73%  |   -6.28%  |  +12.36%
> > > > shared       |  -28.31%  |  -19.99%  |   -7.10%
> > > > short        |  -44.63%  |  -37.48%  |  -33.62%
> > > > ------------------------------------------------------

> Thanks a lot, Jason.
>
> So for this particular set of workloads on a big machine, I think the
> results are mixed and overall "neutral", though I expected the variation
> could probably be bigger, especially for the light workloads.
>
> Any comments from the maintainers or others? Ping Peter and Ben: I haven't
> heard from you on the 5th version.

Been a bit busy... but in general I worry about the performance decrease
on the lighter loads. I should probably run some workloads on my 2-socket
and 4-socket machines, but the coming few weeks will be very busy and I'm
afraid I won't get around to it in a timely manner.
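
For readers without the patch in front of them, the contention at issue is
of this general shape: update_cfs_rq_blocked_load() has every CPU folding
its decayed blocked load into shared per-cfs_rq counters, so the cachelines
holding those counters bounce between CPUs and sockets. One common way to
reduce that kind of contention (a minimal sketch of the pattern only; the
names below are hypothetical and this is not the kernel's code) is to
accumulate deltas per CPU and flush to the shared counter in batches:

/*
 * Sketch: per-CPU batching to reduce contention on a shared
 * blocked-load sum. C11 atomics; hypothetical names throughout.
 */
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS       4
#define FLUSH_THRESH  64    /* flush to the shared sum every N updates */

static atomic_long global_blocked_load;     /* the contended counter */

struct cpu_acc {
	long pending;       /* locally accumulated delta, CPU-private */
	int  nr_updates;
};

static struct cpu_acc acc[NR_CPUS];

/* Contended variant: every update touches the shared cacheline. */
static void add_blocked_load_contended(long delta)
{
	atomic_fetch_add(&global_blocked_load, delta);
}

/* Batched variant: accumulate privately, flush only occasionally. */
static void add_blocked_load_batched(int cpu, long delta)
{
	struct cpu_acc *a = &acc[cpu];

	a->pending += delta;
	if (++a->nr_updates >= FLUSH_THRESH) {
		atomic_fetch_add(&global_blocked_load, a->pending);
		a->pending = 0;
		a->nr_updates = 0;
	}
}

int main(void)
{
	/* Single-threaded demo of the two call patterns. */
	for (int i = 0; i < 256; i++)
		add_blocked_load_batched(i % NR_CPUS, 1);
	add_blocked_load_contended(1);
	printf("global blocked load: %ld\n",
	       atomic_load(&global_blocked_load));
	return 0;
}

The shape of the tradeoff also suggests one plausible reading of the worry
above: batching pays off when many CPUs hammer the shared counter, but
under light load the extra per-CPU bookkeeping and the lag between flushes
are mostly overhead, which could surface as a regression in the low-user
columns.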