Re: tree rcu: call_rcu scalability problem?

From: Paul E. McKenney
Date: Thu Sep 03 2009 - 09:29:06 EST


On Thu, Sep 03, 2009 at 11:01:26AM +0200, Nick Piggin wrote:
> On Wed, Sep 02, 2009 at 10:14:27PM -0700, Paul E. McKenney wrote:
> > From 0544d2da54bad95556a320e57658e244cb2ae8c6 Mon Sep 17 00:00:00 2001
> > From: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
> > Date: Wed, 2 Sep 2009 22:01:50 -0700
> > Subject: [PATCH] Remove grace-period machinery from rcutree __call_rcu()
> >
> > The grace-period machinery in __call_rcu() was a failed attempt to avoid
> > implementing synchronize_rcu_expedited(). But now that this attempt has
> > failed, try removing the machinery.
>
> OK, the workload is parallel processes performing a close(open()) loop
> in a tmpfs filesystem within different cwds (to avoid contention on the
> cwd dentry). The kernel is first patched with my vfs scalability patches,
> so the comparison is with/without Paul's rcu patch.
>
> System is 2s8c opteron, with processes bound to CPUs (first within the
> same socket, then over both sockets as count increases).
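
(Aside for other readers: to make the workload concrete, a minimal
user-space sketch of that close(open()) loop might look like the code
below.  This is not Nick's actual harness; the tmpfs mount point, the
iteration count, and the assumption that CPU numbers map onto sockets
in order are all illustrative.)

/*
 * Illustrative sketch only, not the actual test harness: each child
 * binds itself to one CPU, chdirs into its own directory on a tmpfs
 * mount, and then hammers close(open()) in a tight loop.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define ITERS 1000000L

static void worker(int cpu)
{
        cpu_set_t set;
        char dir[64];
        long i;

        /* Bind this process to a single CPU. */
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        sched_setaffinity(0, sizeof(set), &set);

        /* Per-process cwd on tmpfs, so the cwd dentry is not shared. */
        snprintf(dir, sizeof(dir), "/mnt/tmpfs/d%d", cpu);
        mkdir(dir, 0755);
        chdir(dir);

        for (i = 0; i < ITERS; i++)
                close(open("f", O_CREAT | O_RDWR, 0644));
        exit(0);
}

int main(int argc, char **argv)
{
        int nprocs = argc > 1 ? atoi(argv[1]) : 1;
        int cpu;

        for (cpu = 0; cpu < nprocs; cpu++)
                if (fork() == 0)
                        worker(cpu);
        while (wait(NULL) > 0)
                ;       /* reap all the children */
        return 0;
}
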
>
> procs  tput-base        tput-rcu
>     1   595238 (x1.00)   645161 (x1.00)
>     2  1041666 (x1.75)  1136363 (x1.76)
>     4  1960784 (x3.29)  2298850 (x3.56)
>     8  3636363 (x6.11)  4545454 (x7.05)
>
> Scalability is improved (from 2-8 way it is now actually linear), and
> single thread performance is significantly improved too.
>
> oprofile results collecting clk unhalted samples show the following
> results for the __call_rcu symbol:
>
> procs  samples       %  app name  symbol name
> tput-base
>     1    12153  3.8122  vmlinux   __call_rcu
>     2    29253  3.9899  vmlinux   __call_rcu
>     4    84503  5.4667  vmlinux   __call_rcu
>     8   312816  9.5287  vmlinux   __call_rcu
>
> tput-rcu
>     1     8722  2.8770  vmlinux   __call_rcu
>     2    17275  2.5804  vmlinux   __call_rcu
>     4    33848  2.6015  vmlinux   __call_rcu
>     8    67158  2.5561  vmlinux   __call_rcu
>
> Scaling is clearly much better (it is more important to look at absolute
> samples here, because the percentage depends on other parts of the kernel too).
>
> Feel free to add any of this to your changelog if you think it's important.

Very cool!!!

I got a dissenting view from the people trying to get rid of interrupts
in computational workloads. But I believe that it is possible to
split the difference, getting you almost all the performance benefits
while still permitting them to turn off the scheduling-clock interrupt.
The reason that I believe it should get you the performance benefits is
that deleting the rcu_process_gp_end() and check_for_new_grace_period()
calls didn't do much for you: their overhead is quite small compared to
hammering the system with a full set of IPIs every ten microseconds
or so. ;-)
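
To illustrate the intended division of labor, here is a deliberately
simplified, single-threaded toy model (a sketch only, nothing like the
real rcutree code): call_rcu() does nothing but enqueue the callback,
and noticing, starting, and completing grace periods is left entirely
to the periodic tick path.

/*
 * Toy model only -- NOT the kernel's RCU implementation.  The point:
 * the enqueue fast path does no grace-period work at all; the periodic
 * tick (standing in for the scheduling-clock interrupt) notices queued
 * callbacks, starts grace periods, and invokes ready callbacks.
 */
#include <stdio.h>
#include <stdlib.h>

struct toy_rcu_cb {
        void (*func)(struct toy_rcu_cb *);
        struct toy_rcu_cb *next;
        long gp_needed;                 /* grace period this callback waits for */
};

static struct toy_rcu_cb *cblist;       /* pending callbacks, one "CPU" */
static long gp_completed;               /* last completed grace period */
static long gp_started;                 /* last started grace period */

/* Fast path: enqueue only, no grace-period machinery. */
static void toy_call_rcu(struct toy_rcu_cb *cb,
                         void (*func)(struct toy_rcu_cb *))
{
        cb->func = func;
        cb->gp_needed = gp_completed + 1;
        cb->next = cblist;
        cblist = cb;
}

/* Slow path, driven from the (simulated) scheduling-clock tick. */
static void toy_tick(void)
{
        struct toy_rcu_cb *cb, *next;

        /* Toy rule: a grace period started on an earlier tick completes now. */
        if (gp_started > gp_completed)
                gp_completed = gp_started;

        /* Invoke callbacks whose grace period has completed. */
        for (cb = cblist, cblist = NULL; cb; cb = next) {
                next = cb->next;
                if (cb->gp_needed <= gp_completed) {
                        cb->func(cb);
                } else {
                        cb->next = cblist;      /* not ready: requeue */
                        cblist = cb;
                }
        }

        /* Start a new grace period only if callbacks still need one. */
        if (cblist && gp_started == gp_completed)
                gp_started = gp_completed + 1;
}

static void toy_free(struct toy_rcu_cb *cb)
{
        printf("callback ran after grace period %ld\n", gp_completed);
        free(cb);
}

int main(void)
{
        int i;

        for (i = 0; i < 3; i++) {
                toy_call_rcu(malloc(sizeof(struct toy_rcu_cb)), toy_free);
                toy_tick();     /* notices the callback, starts a GP */
                toy_tick();     /* completes the GP, runs the callback */
        }
        return 0;
}

In the real code, the "completes now" step is of course replaced by
waiting for every CPU to pass through a quiescent state; the point of
the sketch is only that all of that work can live off the enqueue
fast path.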

So could you please give the following experimental patch a go?
If it works for you, I will put together a production-ready patch
along these lines.

Thanx, Paul

------------------------------------------------------------------------