Re: [Patch v2] rcu: simplify the calculation of rcu_state.ncpus
From: Paul E. McKenney
Date: Sun Apr 19 2020 - 21:42:57 EST
On Sun, Apr 19, 2020 at 09:57:15PM +0000, Wei Yang wrote:
> There is only 1 bit set in mask, which means the difference between
> oldmask and the new one would be at the position where the bit is set in
> mask.
>
> Based on this knowledge, rcu_state.ncpus could be calculated by checking
> whether mask is already set in rnp->expmaskinitnext.
>
> Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
Queued, thank you!

I updated the commit log as shown below, so please let me know if I
messed something up.

							Thanx, Paul
------------------------------------------------------------------------
commit 2ff1b8268456b1a476f8b79672c87d32d4f59024
Author: Wei Yang <richard.weiyang@xxxxxxxxx>
Date: Sun Apr 19 21:57:15 2020 +0000
rcu: Simplify the calculation of rcu_state.ncpus
There is only 1 bit set in mask, which means that the only difference
between oldmask and the new one will be at the position where the bit is
set in mask. This commit therefore updates rcu_state.ncpus by checking
whether the bit in mask is already set in rnp->expmaskinitnext.
Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f288477..6d39485 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3732,10 +3732,9 @@ void rcu_cpu_starting(unsigned int cpu)
 {
 	unsigned long flags;
 	unsigned long mask;
-	int nbits;
-	unsigned long oldmask;
 	struct rcu_data *rdp;
 	struct rcu_node *rnp;
+	bool newcpu;
 
 	if (per_cpu(rcu_cpu_started, cpu))
 		return;
@@ -3747,12 +3746,10 @@ void rcu_cpu_starting(unsigned int cpu)
 	mask = rdp->grpmask;
 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext | mask);
-	oldmask = rnp->expmaskinitnext;
+	newcpu = !(rnp->expmaskinitnext & mask);
 	rnp->expmaskinitnext |= mask;
-	oldmask ^= rnp->expmaskinitnext;
-	nbits = bitmap_weight(&oldmask, BITS_PER_LONG);
 	/* Allow lockless access for expedited grace periods. */
-	smp_store_release(&rcu_state.ncpus, rcu_state.ncpus + nbits); /* ^^^ */
+	smp_store_release(&rcu_state.ncpus, rcu_state.ncpus + newcpu); /* ^^^ */
 	ASSERT_EXCLUSIVE_WRITER(rcu_state.ncpus);
 	rcu_gpnum_ovf(rnp, rdp); /* Offline-induced counter wrap? */
 	rdp->rcu_onl_gp_seq = READ_ONCE(rcu_state.gp_seq);
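
For readers following along, here is a minimal userspace sketch (not part
of the patch) that checks the equivalence the commit log relies on: when
mask has exactly one bit set, the old nbits computation, the popcount of
(oldmask ^ (oldmask | mask)), is 1 exactly when that bit was not already
present in oldmask, which is what !(oldmask & mask) tests. It uses
__builtin_popcountl in place of the kernel's bitmap_weight(), and all
variable names here are illustrative.

#include <assert.h>
#include <stdio.h>

int main(void)
{
	int bit, i;

	for (bit = 0; bit < 64; bit++) {
		/* A single-bit mask, standing in for rdp->grpmask. */
		unsigned long mask = 1UL << bit;
		/* A few oldmask patterns, with and without the bit already set. */
		unsigned long samples[] = { 0UL, ~0UL, 0x5a5aUL, mask, ~mask };

		for (i = 0; i < 5; i++) {
			unsigned long oldmask = samples[i];
			unsigned long newmask = oldmask | mask;

			/* Old computation: count the bits that changed. */
			int nbits = __builtin_popcountl(oldmask ^ newmask);

			/* New computation: was the bit absent before? */
			int newcpu = !(oldmask & mask);

			assert(nbits == newcpu);
		}
	}
	printf("old (bitmap_weight) and new (newcpu) increments agree\n");
	return 0;
}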