Re: [PATCH v3] rcu: Allow to eliminate softirq processing from rcutree

From: Paul E. McKenney
Date: Fri Mar 22 2019 - 10:31:12 EST


On Thu, Mar 21, 2019 at 04:32:44PM -0700, Paul E. McKenney wrote:
> On Wed, Mar 20, 2019 at 04:46:01PM -0700, Paul E. McKenney wrote:
> > On Wed, Mar 20, 2019 at 10:13:33PM +0100, Sebastian Andrzej Siewior wrote:
> > > Running RCU out of softirq is a problem for some workloads that would
> > > like to manage RCU core processing independently of other softirq
> > > work, for example, setting kthread priority. This commit therefore
> > > introduces the `rcunosoftirq' option which moves the RCU core work
> > > from softirq to a per-CPU/per-flavor SCHED_OTHER kthread named rcuc.
> > > The SCHED_OTHER approach avoids the scalability problems that appeared
> > > with the earlier attempt to move RCU core processing from softirq to
> > > kthreads. That said, kernels built with RCU_BOOST=y will run the
> > > rcuc kthreads at the RCU-boosting priority.
> > >
> > > Reported-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> > > Tested-by: Mike Galbraith <efault@xxxxxx>
> > > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> >
> > Thank you! I reverted v2 and applied this one with the same sort of
> > update. Testing is going well thus far aside from my failing to add
> > the required "=0" after rcutree.use_softirq. I will probably not
> > be the only one who will run afoul of this, so I updated the commit log
> > and the documentation accordingly, as shown below.
>
> And I took a look, please see updates/questions interspersed.
>
> I didn't find anything substantive, but I still get hangs, which is
> the normal situation. ;-)
>
> Will fire off more testing...
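
For those who have not dug into the quoted patch: per-CPU kthreads like the
rcuc threads it describes are normally registered through the smpboot helpers.
A from-memory sketch follows -- this is not a hunk from Sebastian's patch, and
the callback names and the pending-work flag are made up for illustration:

#include <linux/init.h>
#include <linux/percpu.h>
#include <linux/sched.h>
#include <linux/smpboot.h>

static DEFINE_PER_CPU(struct task_struct *, rcu_core_kthread_task);
static DEFINE_PER_CPU(bool, rcu_core_work_pending);	/* made-up flag */

static int rcu_core_kthread_should_run(unsigned int cpu)
{
	return per_cpu(rcu_core_work_pending, cpu);
}

static void rcu_core_kthread_fn(unsigned int cpu)
{
	per_cpu(rcu_core_work_pending, cpu) = false;
	/* Do one batch of RCU core work, formerly done in RCU_SOFTIRQ. */
}

static struct smp_hotplug_thread rcu_core_thread_spec = {
	.store			= &rcu_core_kthread_task,
	.thread_should_run	= rcu_core_kthread_should_run,
	.thread_fn		= rcu_core_kthread_fn,
	.thread_comm		= "rcuc/%u",
};

static int __init rcu_spawn_core_kthreads_sketch(void)
{
	/* One "rcuc/N" kthread per CPU, SCHED_OTHER unless boosted later. */
	return smpboot_register_percpu_thread(&rcu_core_thread_spec);
}
early_initcall(rcu_spawn_core_kthreads_sketch);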

And despite my protestations about restrictions involving the scheduler
and rcu_read_unlock(), with the patch below TREE01, TREE02, TREE03, and
TREE09 pass an hour of rcutorture with rcutree.use_softirq=0. Without
this patch, seven-minute runs hit hard hangs and produce splats like this one:

[ 18.417315] BUG: spinlock recursion on CPU#5, rcu_torture_rea/763
[ 18.418624] lock: 0xffff9d207eb61940, .magic: dead4ead, .owner: rcu_torture_rea/763, .owner_cpu: 5
[ 18.420418] CPU: 5 PID: 763 Comm: rcu_torture_rea Not tainted 5.1.0-rc1+ #1
[ 18.421786] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
[ 18.423375] Call Trace:
[ 18.423880] <IRQ>
[ 18.424284] dump_stack+0x46/0x5b
[ 18.424953] do_raw_spin_lock+0x8d/0x90
[ 18.425699] try_to_wake_up+0x2cd/0x4f0
[ 18.426493] invoke_rcu_core_kthread+0x63/0x80
[ 18.427337] rcu_read_unlock_special+0x41/0x80
[ 18.428212] __rcu_read_unlock+0x48/0x50
[ 18.428984] cpuacct_charge+0x96/0xd0
[ 18.429725] ? cpuacct_charge+0x2e/0xd0
[ 18.430463] update_curr+0x112/0x240
[ 18.431172] enqueue_task_fair+0xa9/0x1220
[ 18.432009] ttwu_do_activate+0x49/0xa0
[ 18.432741] sched_ttwu_pending+0x75/0xa0
[ 18.433583] scheduler_ipi+0x53/0x150
[ 18.434291] reschedule_interrupt+0xf/0x20
[ 18.435137] </IRQ>

I clearly need to audit the setting of ->rcu_read_unlock_special.
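
The shape of the problem appears simple enough: a callback that runs while a
non-recursive lock is held turns around and tries to take that same lock.  In
the trace above, the scheduler holds a runqueue lock while charging CPU time,
the rcu_read_unlock() in cpuacct_charge() lands in rcu_read_unlock_special(),
and the resulting invoke_rcu_core_kthread()/try_to_wake_up() wants a lock this
path already holds.  The user-space toy below -- not kernel code, all names
invented -- shows the same pattern that the "spinlock recursion" check reports:

/* User-space toy, NOT kernel code: it only models the shape of the bug. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_t toy_lock_owner;
static int toy_lock_held;

static void toy_lock(void)
{
	if (toy_lock_held && pthread_equal(toy_lock_owner, pthread_self())) {
		fprintf(stderr, "BUG: toy lock recursion\n");	/* cf. splat */
		abort();
	}
	/* A real spinlock would spin here; irrelevant for a single thread. */
	toy_lock_held = 1;
	toy_lock_owner = pthread_self();
}

static void toy_unlock(void)
{
	toy_lock_held = 0;
}

/* Stands in for try_to_wake_up(): wants the lock we may already hold. */
static void toy_wake_worker(void)
{
	toy_lock();
	/* ...queue the worker... */
	toy_unlock();
}

/* Stands in for the rcu_read_unlock_special() path taken under the lock. */
static void toy_unlock_callback(void)
{
	toy_wake_worker();
}

int main(void)
{
	toy_lock();		/* scheduler path takes the runqueue lock...  */
	toy_unlock_callback();	/* ...and a callback under it wakes a worker. */
	toy_unlock();
	return 0;
}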

Again, the patch below is bad for expedited grace periods, so it is
experimental.

Thanx, Paul

------------------------------------------------------------------------

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index ca972b0b2467..d133fa837426 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -607,12 +607,9 @@ static void rcu_read_unlock_special(struct task_struct *t)
 	if (preempt_bh_were_disabled || irqs_were_disabled) {
 		WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, false);
 		/* Need to defer quiescent state until everything is enabled. */
-		if (irqs_were_disabled) {
+		if (irqs_were_disabled && use_softirq) {
 			/* Enabling irqs does not reschedule, so... */
-			if (use_softirq)
-				raise_softirq_irqoff(RCU_SOFTIRQ);
-			else
-				invoke_rcu_core();
+			raise_softirq_irqoff(RCU_SOFTIRQ);
 		} else {
 			/* Enabling BH or preempt does reschedule, so... */
 			set_tsk_need_resched(current);
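
For anyone reading the hunk without applying it, the deferral branch in
rcu_read_unlock_special() ends up looking roughly like this (reconstructed
from the diff above, so a sketch of the result rather than a copy of the file):

	if (preempt_bh_were_disabled || irqs_were_disabled) {
		WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, false);
		/* Need to defer quiescent state until everything is enabled. */
		if (irqs_were_disabled && use_softirq) {
			/* Enabling irqs does not reschedule, so... */
			raise_softirq_irqoff(RCU_SOFTIRQ);
		} else {
			/* Enabling BH or preempt does reschedule, so... */
			set_tsk_need_resched(current);
			/* ...remainder of the branch elided... */
		}
	}

That is, with rcutree.use_softirq=0 an irq-disabled rcu_read_unlock() no longer
tries to wake the rcuc kthread at all (the wakeup the splat above was tripping
over); it falls back to setting need-resched and leaving the deferred quiescent
state for later, which is presumably also why this is so hard on expedited
grace periods.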