Re: [PATCH tip/core/rcu 02/23] rcu: Allow RCU grace-period initialization to be preempted
From: Paul E. McKenney
Date: Tue Sep 04 2012 - 21:22:38 EST
On Sat, Sep 01, 2012 at 06:09:35PM -0700, Josh Triplett wrote:
> On Thu, Aug 30, 2012 at 11:18:17AM -0700, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
> >
> > RCU grace-period initialization is currently carried out with interrupts
> > disabled, which can result in 200-microsecond latency spikes on systems
> > on which RCU has been configured for 4096 CPUs. This patch therefore
> > makes the RCU grace-period initialization be preemptible, which should
> > eliminate those latency spikes. Similar spikes from grace-period cleanup
> > and the forcing of quiescent states will be dealt with similarly by later
> > patches.
> >
> > Reported-by: Mike Galbraith <mgalbraith@xxxxxxx>
> > Reported-by: Dimitri Sivanich <sivanich@xxxxxxx>
> > Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
>
> Does it make sense to have cond_resched() right before the continues,
> which lead right back up to the wait_event_interruptible at the top of
> the loop? Or do you expect to usually find that event already
> signalled?
Given that the event might already have been signaled, I need to keep the
cond_resched() calls inside the loop; otherwise, we get the long latencies back.
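
To illustrate, a minimal sketch (illustrative only -- my_wq, my_gp_requested,
and my_gp_kthread() are made-up stand-ins, not the rcutree.c code): because
wait_event_interruptible() returns without sleeping when its condition is
already true, the kthread can fall straight through to the early-exit paths,
so the explicit cond_resched() on each path is what guarantees a preemption
point on those iterations.

#include <linux/sched.h>
#include <linux/wait.h>

/* Illustrative only: my_wq and my_gp_requested stand in for the
 * real rcu_state fields; this is not the actual rcu_gp_kthread(). */
static DECLARE_WAIT_QUEUE_HEAD(my_wq);
static int my_gp_requested;

static int my_gp_kthread(void *arg)
{
	for (;;) {
		/* Returns immediately if my_gp_requested is already set,
		 * so this is not a reliable preemption point by itself. */
		wait_event_interruptible(my_wq, my_gp_requested);
		if (!my_gp_requested) {
			/* Early exit: without this cond_resched(), a run of
			 * such iterations never yields the CPU on a
			 * non-preemptible kernel. */
			cond_resched();
			continue;
		}
		my_gp_requested = 0;
		/* ... grace-period initialization work, with cond_resched()
		 * between per-rcu_node stages ... */
		cond_resched();
	}
	return 0;
}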
Thanx, Paul
> In any case:
>
> Reviewed-by: Josh Triplett <josh@xxxxxxxxxxxxxxxx>
>
> > kernel/rcutree.c | 17 ++++++++++-------
> > 1 files changed, 10 insertions(+), 7 deletions(-)
> >
> > diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> > index e1c5868..ef56aa3 100644
> > --- a/kernel/rcutree.c
> > +++ b/kernel/rcutree.c
> > @@ -1069,6 +1069,7 @@ static int rcu_gp_kthread(void *arg)
> > * don't start another one.
> > */
> > raw_spin_unlock_irqrestore(&rnp->lock, flags);
> > + cond_resched();
> > continue;
> > }
> >
> > @@ -1079,6 +1080,7 @@ static int rcu_gp_kthread(void *arg)
> > */
> > rsp->fqs_need_gp = 1;
> > raw_spin_unlock_irqrestore(&rnp->lock, flags);
> > + cond_resched();
> > continue;
> > }
> >
> > @@ -1089,10 +1091,10 @@ static int rcu_gp_kthread(void *arg)
> > rsp->fqs_state = RCU_GP_INIT; /* Stop force_quiescent_state. */
> > rsp->jiffies_force_qs = jiffies + RCU_JIFFIES_TILL_FORCE_QS;
> > record_gp_stall_check_time(rsp);
> > - raw_spin_unlock(&rnp->lock); /* leave irqs disabled. */
> > + raw_spin_unlock_irqrestore(&rnp->lock, flags);
> >
> > /* Exclude any concurrent CPU-hotplug operations. */
> > - raw_spin_lock(&rsp->onofflock); /* irqs already disabled. */
> > + get_online_cpus();
> >
> > /*
> > * Set the quiescent-state-needed bits in all the rcu_node
> > @@ -1112,7 +1114,7 @@ static int rcu_gp_kthread(void *arg)
> > * due to the fact that we have irqs disabled.
> > */
> > rcu_for_each_node_breadth_first(rsp, rnp) {
> > - raw_spin_lock(&rnp->lock); /* irqs already disabled. */
> > + raw_spin_lock_irqsave(&rnp->lock, flags);
> > rcu_preempt_check_blocked_tasks(rnp);
> > rnp->qsmask = rnp->qsmaskinit;
> > rnp->gpnum = rsp->gpnum;
> > @@ -1123,15 +1125,16 @@ static int rcu_gp_kthread(void *arg)
> > trace_rcu_grace_period_init(rsp->name, rnp->gpnum,
> > rnp->level, rnp->grplo,
> > rnp->grphi, rnp->qsmask);
> > - raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
> > + raw_spin_unlock_irqrestore(&rnp->lock, flags);
> > + cond_resched();
> > }
> >
> > rnp = rcu_get_root(rsp);
> > - raw_spin_lock(&rnp->lock); /* irqs already disabled. */
> > + raw_spin_lock_irqsave(&rnp->lock, flags);
> > /* force_quiescent_state() now OK. */
> > rsp->fqs_state = RCU_SIGNAL_INIT;
> > - raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
> > - raw_spin_unlock_irqrestore(&rsp->onofflock, flags);
> > + raw_spin_unlock_irqrestore(&rnp->lock, flags);
> > + put_online_cpus();
> > }
> > return 0;
> > }
> > --
> > 1.7.8
> >
>