Re: [PATCH v2] rcutorture: Avoid problematic critical section nesting on PREEMPT_RT

From: Paul E. McKenney
Date: Fri Aug 20 2021 - 18:10:59 EST


On Fri, Aug 20, 2021 at 09:42:36AM +0200, Sebastian Andrzej Siewior wrote:
> From: "From: Scott Wood" <swood@xxxxxxxxxx>
>
> rcutorture is generating some nesting scenarios that are not compatible with PREEMPT_RT.
> For example:
> preempt_disable();
> rcu_read_lock_bh();
> preempt_enable();
> rcu_read_unlock_bh();
>
> The problem here is that on PREEMPT_RT the bottom halves have to be
> disabled and enabled in preemptible context.
>
> Reorder the locking: start with BH locking and only then disable
> preemption or interrupts. Unlock in the reverse order: first enable
> interrupts and preemption, and re-enable BH at the very end. Ensure
> that on PREEMPT_RT the BH state is not changed while in
> non-preemptible context.
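
[ Illustration only, not part of the patch: under this rule a valid
  nesting keeps BH outermost, e.g.:

	rcu_read_lock_bh();	/* BH first, still preemptible */
	preempt_disable();
	...
	preempt_enable();	/* leave atomic context first */
	rcu_read_unlock_bh();	/* BH changed only while preemptible */
]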
>
> Link: https://lkml.kernel.org/r/20190911165729.11178-6-swood@xxxxxxxxxx
> Link: https://lkml.kernel.org/r/20210819182035.GF4126399@paulmck-ThinkPad-P17-Gen-1
> Signed-off-by: Scott Wood <swood@xxxxxxxxxx>
> [bigeasy: Drop ATOM_BH, make it only about changing BH in atomic
> context. Allow enabling RCU in IRQ-off section. Reword commit message.]
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

Looks plausible. ;-)

I have queued this for testing and further review. If all goes well,
perhaps the v5.16 merge window.

Thanx, Paul

> ---
> v1…v2:
> - Drop the ATOM_BH* bits. They don't seem to be needed; Paul did not
> want the preempt-disable around enabling/disabling BH, as it might
> paper over things that RCU itself should take care of.
>
> - Allow enabling RCU with disabled interrupts on RT. Scott confirmed
> that it was needed at the time but might no longer be. Paul said that
> it might have been required at some point. It survived multiple
> 6h-long TREE01 and TREE06 test runs.
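
[ I.e., a sequence like the following (sketch, not from the patch)
  is expected to keep working on RT:

	local_irq_disable();
	rcu_read_lock();	/* RCU reader entered with IRQs off */
	local_irq_enable();
	rcu_read_unlock();
]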
>
> kernel/rcu/rcutorture.c | 48 ++++++++++++++++++++++++++++++-----------
> 1 file changed, 36 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
> index 40ef5417d9545..d2ef535530b10 100644
> --- a/kernel/rcu/rcutorture.c
> +++ b/kernel/rcu/rcutorture.c
> @@ -1432,28 +1432,34 @@ static void rcutorture_one_extend(int *readstate, int newstate,
>  	/* First, put new protection in place to avoid critical-section gap. */
>  	if (statesnew & RCUTORTURE_RDR_BH)
>  		local_bh_disable();
> +	if (statesnew & RCUTORTURE_RDR_RBH)
> +		rcu_read_lock_bh();
>  	if (statesnew & RCUTORTURE_RDR_IRQ)
>  		local_irq_disable();
>  	if (statesnew & RCUTORTURE_RDR_PREEMPT)
>  		preempt_disable();
> -	if (statesnew & RCUTORTURE_RDR_RBH)
> -		rcu_read_lock_bh();
>  	if (statesnew & RCUTORTURE_RDR_SCHED)
>  		rcu_read_lock_sched();
>  	if (statesnew & RCUTORTURE_RDR_RCU)
>  		idxnew = cur_ops->readlock() << RCUTORTURE_RDR_SHIFT;
> 
> -	/* Next, remove old protection, irq first due to bh conflict. */
> +	/*
> +	 * Next, remove old protection, in decreasing order of strength
> +	 * to avoid unlock paths that aren't safe in the stronger
> +	 * context. Namely: BH can not be enabled with disabled interrupts.
> +	 * Additionally PREEMPT_RT requires that BH is enabled in preemptible
> +	 * context.
> +	 */
>  	if (statesold & RCUTORTURE_RDR_IRQ)
>  		local_irq_enable();
> -	if (statesold & RCUTORTURE_RDR_BH)
> -		local_bh_enable();
>  	if (statesold & RCUTORTURE_RDR_PREEMPT)
>  		preempt_enable();
> -	if (statesold & RCUTORTURE_RDR_RBH)
> -		rcu_read_unlock_bh();
>  	if (statesold & RCUTORTURE_RDR_SCHED)
>  		rcu_read_unlock_sched();
> +	if (statesold & RCUTORTURE_RDR_BH)
> +		local_bh_enable();
> +	if (statesold & RCUTORTURE_RDR_RBH)
> +		rcu_read_unlock_bh();
>  	if (statesold & RCUTORTURE_RDR_RCU) {
>  		bool lockit = !statesnew && !(torture_random(trsp) & 0xffff);
> 
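
[ To make the new ordering concrete, a sketch (not from the patch) of
  the strongest nesting rcutorture can now generate:

	/* acquire: BH first, then the atomic contexts */
	local_bh_disable();
	rcu_read_lock_bh();
	local_irq_disable();
	preempt_disable();
	...
	/* release: in decreasing order of strength */
	local_irq_enable();
	preempt_enable();
	local_bh_enable();	/* only once preemptible again */
	rcu_read_unlock_bh();

  so both BH-enable paths run in preemptible context with IRQs on,
  which is what PREEMPT_RT requires. ]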
> @@ -1496,6 +1502,9 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
>  	int mask = rcutorture_extend_mask_max();
>  	unsigned long randmask1 = torture_random(trsp) >> 8;
>  	unsigned long randmask2 = randmask1 >> 3;
> +	unsigned long preempts = RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED;
> +	unsigned long preempts_irq = preempts | RCUTORTURE_RDR_IRQ;
> +	unsigned long bhs = RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
> 
>  	WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT);
>  	/* Mostly only one bit (need preemption!), sometimes lots of bits. */
> @@ -1503,11 +1512,26 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
>  		mask = mask & randmask2;
>  	else
>  		mask = mask & (1 << (randmask2 % RCUTORTURE_RDR_NBITS));
> -	/* Can't enable bh w/irq disabled. */
> -	if ((mask & RCUTORTURE_RDR_IRQ) &&
> -	    ((!(mask & RCUTORTURE_RDR_BH) && (oldmask & RCUTORTURE_RDR_BH)) ||
> -	     (!(mask & RCUTORTURE_RDR_RBH) && (oldmask & RCUTORTURE_RDR_RBH))))
> -		mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
> +
> +	/*
> +	 * Can't enable bh w/irq disabled.
> +	 */
> +	if (mask & RCUTORTURE_RDR_IRQ)
> +		mask |= oldmask & bhs;
> +
> +	/*
> +	 * Ideally these sequences would be detected in debug builds
> +	 * (regardless of RT), but until then don't stop testing
> +	 * them on non-RT.
> +	 */
> +	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
> +		/* Can't modify BH in atomic context */
> +		if (oldmask & preempts_irq)
> +			mask &= ~bhs;
> +		if ((oldmask | mask) & preempts_irq)
> +			mask |= oldmask & bhs;
> +	}
> +
>  	return mask ?: RCUTORTURE_RDR_RCU;
>  }
> 
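
[ A worked example of the new RT filtering, with hypothetical values
  not taken from the patch: suppose oldmask = RCUTORTURE_RDR_PREEMPT
  and the random draw yields mask = RCUTORTURE_RDR_BH.  Then:

	if (oldmask & preempts_irq)	/* true: old reader is atomic */
		mask &= ~bhs;		/* BH bit stripped, mask == 0  */
	if ((oldmask | mask) & preempts_irq)
		mask |= oldmask & bhs;	/* no-op: oldmask has no BH    */

  mask ends up 0, so the function falls back to RCUTORTURE_RDR_RCU
  instead of toggling BH inside a non-preemptible section. ]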
> --
> 2.33.0
>