Re: [PATCH] rcu: update: make RCU_EXPEDITE_BOOT default

From: Paul E. McKenney
Date: Mon Nov 07 2016 - 14:04:32 EST


On Mon, Nov 07, 2016 at 10:08:32AM -0800, Josh Triplett wrote:
> On Mon, Nov 07, 2016 at 10:05:13AM -0800, Paul E. McKenney wrote:
> > On Mon, Nov 07, 2016 at 09:35:46AM -0800, Josh Triplett wrote:
> > > On Mon, Nov 07, 2016 at 06:30:30PM +0100, Sebastian Andrzej Siewior wrote:
> > > > On 2016-11-07 12:19:39 [-0500], Steven Rostedt wrote:
> > > > > I agree, but if this creates a boot time regression in large machines,
> > > > > it may not be warranted.
> > > > >
> > > > > I know Linus usually doesn't like options with default y, but this may
> > > > > be one of those exceptions. Perhaps we should make it on by default and
> > > > > say in the config "if you have a machine with 100s or 1000s of CPUs,
> > > > > you may want to disable this".
> > > >
> > > > The default could change if we know where the limit is. I have access to
> > > > a box with approx 140 CPUs so I could check there if it is already bad.
> > > > But everything above that / in the 1000 range is a different story.
> > >
> > > Right; if we can characterize what machines it benefits and what
> > > machines it hurts, we can automatically detect and run the appropriate
> > > case with no configuration option needed.
> >
> > I very much like this approach! Anyone have access to large systems on
> > which this experiment could be carried out? In the absence of new data,
> > I would just set the cutoff at 256 CPUs, as I have done in the past.
>
> One potential issue here: the point where RCU_EXPEDITE_BOOT pessimizes
> likely depends on interconnect as much as CPUs. I'd guess that you may
> want to set the cutoff based on number of NUMA nodes, rather than number
> of CPUs.

Longer term, the solution would be a function that defines the cutoff
point. If an architecture doesn't define the function, it gets the
default cutoff of 256; if it does define the function, it gets whatever
it coded.

Thanx, Paul