Re: [PATCH v7 02/11] rcu: Make call_rcu() lazy to save power
From: Uladzislau Rezki
Date: Wed Oct 05 2022 - 07:29:06 EST
On Tue, Oct 04, 2022 at 11:27:37AM -0700, Paul E. McKenney wrote:
> On Tue, Oct 04, 2022 at 06:20:03PM +0200, Uladzislau Rezki wrote:
> > On Tue, Oct 04, 2022 at 08:58:14AM -0700, Paul E. McKenney wrote:
> > > On Tue, Oct 04, 2022 at 04:53:09PM +0200, Uladzislau Rezki wrote:
> > > > On Tue, Oct 04, 2022 at 06:30:04AM -0700, Paul E. McKenney wrote:
> > > > > On Tue, Oct 04, 2022 at 01:41:38PM +0200, Uladzislau Rezki wrote:
> > > > > > > trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("Check"));
> > > > > > > rcu_nocb_lock_irqsave(rdp, flags);
> > > > > > > lockdep_assert_held(&rdp->nocb_lock);
> > > > > > > bypass_ncbs = rcu_cblist_n_cbs(&rdp->nocb_bypass);
> > > > > > > - if (bypass_ncbs &&
> > > > > > > + lazy_ncbs = READ_ONCE(rdp->lazy_len);
> > > > > > > +
> > > > > > > + if (bypass_ncbs && (lazy_ncbs == bypass_ncbs) &&
> > > > > > > + (time_after(j, READ_ONCE(rdp->nocb_bypass_first) + jiffies_till_flush) ||
> > > > > > > + bypass_ncbs > 2 * qhimark)) {
> > > > > > Do you know why we want a doubled "qhimark" threshold? It is not only
> > > > > > in this place; there are several. I am asking because it is not what
> > > > > > the user expects.
> > > > >
> > > > > OK, I will bite... What does the user expect? Or, perhaps a better
> > > > > question, how is this choice causing the user problems?
> > > > >
> > > > Yesterday, when I was checking lazy-v6 on Android, I noticed the following:
> > > >
> > > > <snip>
> > > > ...
> > > > rcuop/4-48 [006] d..1 184.780328: rcu_batch_start: rcu_preempt CBs=15572 bl=121
> > > > rcuop/6-62 [000] d..1 184.796939: rcu_batch_start: rcu_preempt CBs=21503 bl=167
> > > > rcuop/6-62 [003] d..1 184.800706: rcu_batch_start: rcu_preempt CBs=24677 bl=192
> > > > rcuop/6-62 [005] d..1 184.803773: rcu_batch_start: rcu_preempt CBs=27117 bl=211
> > > > rcuop/6-62 [005] d..1 184.805732: rcu_batch_start: rcu_preempt CBs=22391 bl=174
> > > > rcuop/6-62 [005] d..1 184.809083: rcu_batch_start: rcu_preempt CBs=12554 bl=98
> > > > rcuop/6-62 [005] d..1 184.824228: rcu_batch_start: rcu_preempt CBs=16177 bl=126
> > > > rcuop/4-48 [006] d..1 184.836193: rcu_batch_start: rcu_preempt CBs=24129 bl=188
> > > > rcuop/4-48 [006] d..1 184.844147: rcu_batch_start: rcu_preempt CBs=25854 bl=201
> > > > rcuop/4-48 [006] d..1 184.847257: rcu_batch_start: rcu_preempt CBs=21328 bl=166
> > > > rcuop/4-48 [006] d..1 184.852128: rcu_batch_start: rcu_preempt CBs=21710 bl=169
> > > > ...
> > > > <snip>
> > > >
> > > > On my device, "qhimark" is set to:
> > > >
> > > > <snip>
> > > > XQ-CT54:/sys/module/rcutree/parameters # cat qhimark
> > > > 10000
> > > > XQ-CT54:/sys/module/rcutree/parameters #
> > > > <snip>
> > > >
> > > > so I expect that once we pass the 10 000-callback threshold, the flush
> > > > should occur. This parameter gives us a way to control how much memory
> > > > is allowed to accumulate before it is reclaimed.
> > >
> > > I did understand that you were surprised.
> > >
> > > But what problem did this cause other than you being surprised?
> > >
> > It is not about being surprised; it is about expectations. If I set the
> > threshold to 100, I expect memory to be reclaimed at around 100 callbacks.
> > But in fact the effective threshold is 2 * 100.
> >
> > I am not aware of any issues with it; I just noticed this behaviour
> > during testing.
>
> Whew!!!
>
> This value was arrived at when tuning this code to best deal with callback
> floods.
>
Actually the "qhimark" is correctly handled by the caller and flush is
initiated exactly after what we have in the(as one of the conditions):
/sys/module/rcutree/parameters/qhimark
<snip>
	if ((ncbs && j != READ_ONCE(rdp->nocb_bypass_first)) ||
	    ncbs >= qhimark) {
		rcu_nocb_lock(rdp);
<snip>
so it is not doubled. I mixed it up with another place, where you do double it:
<snip>
	if (bypass_ncbs &&
	    (time_after(j, READ_ONCE(rdp->nocb_bypass_first) + 1) ||
	     bypass_ncbs > 2 * qhimark)) {
<snip>
It is in nocb_gp_wait(). Indeed, the doubled threshold is needed there if
we have a real flood scenario.
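
Just to make the difference concrete, below is a minimal user-space
sketch (not kernel code) of the two conditions. The variable names
mirror the kernel's, but the helper functions are made up, a plain
comparison stands in for time_after() (so jiffies wraparound is
ignored), and qhimark is a fixed constant here:

<snip>
#include <stdbool.h>
#include <stdio.h>

static const unsigned long qhimark = 10000;

/* Enqueue path (cf. rcu_nocb_try_bypass()): flush at exactly qhimark. */
static bool enqueue_wants_flush(unsigned long ncbs, unsigned long j,
				unsigned long nocb_bypass_first)
{
	return (ncbs && j != nocb_bypass_first) || ncbs >= qhimark;
}

/* GP kthread path (cf. nocb_gp_wait()): tolerate up to 2 * qhimark. */
static bool gp_wait_wants_flush(unsigned long bypass_ncbs, unsigned long j,
				unsigned long nocb_bypass_first)
{
	return bypass_ncbs &&
	       (j > nocb_bypass_first + 1 || bypass_ncbs > 2 * qhimark);
}

int main(void)
{
	unsigned long j = 100;		/* pretend "jiffies" */
	unsigned long first = 100;	/* bypass list started this jiffy */

	/* At exactly qhimark, the enqueue path flushes ... */
	printf("enqueue, ncbs=10000: flush=%d\n",
	       enqueue_wants_flush(10000, j, first));
	/* ... but the GP kthread, in the same jiffy, does not ... */
	printf("gp_wait, ncbs=10000: flush=%d\n",
	       gp_wait_wants_flush(10000, j, first));
	/* ... until the list grows past 2 * qhimark. */
	printf("gp_wait, ncbs=20001: flush=%d\n",
	       gp_wait_wants_flush(20001, j, first));
	return 0;
}
<snip>

So within a single jiffy the GP kthread leaves the bypass list alone
until it grows past 2 * qhimark; the enqueue path has already flushed
at qhimark, which is why the doubled limit only matters as a backstop
during a real flood.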
Thanks!
--
Uladzislau Rezki