Re: [PATCH v5 06/18] rcu: Introduce call_rcu_lazy() API implementation

From: Joel Fernandes
Date: Tue Sep 06 2022 - 12:57:33 EST

On 9/6/2022 12:38 PM, Joel Fernandes wrote:
>
>
> On 9/6/2022 12:31 PM, Joel Fernandes wrote:
>>
>>
>> On 9/6/2022 12:15 PM, Joel Fernandes wrote:
>>>>> @@ -461,16 +521,29 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
>>>>> // We need to use the bypass.
>>>>> rcu_nocb_wait_contended(rdp);
>>>>> rcu_nocb_bypass_lock(rdp);
>>>>> +
>>>>> ncbs = rcu_cblist_n_cbs(&rdp->nocb_bypass);
>>>>> rcu_segcblist_inc_len(&rdp->cblist); /* Must precede enqueue. */
>>>>> rcu_cblist_enqueue(&rdp->nocb_bypass, rhp);
>>>>> +
>>>>> + if (IS_ENABLED(CONFIG_RCU_LAZY) && lazy)
>>>>> + WRITE_ONCE(rdp->lazy_len, rdp->lazy_len + 1);
>>>>> +
>>>>> if (!ncbs) {
>>>>> WRITE_ONCE(rdp->nocb_bypass_first, j);
>>>>> trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("FirstBQ"));
>>>>> }
>>>>> +
>>>>> rcu_nocb_bypass_unlock(rdp);
>>>>> smp_mb(); /* Order enqueue before wake. */
>>>>> - if (ncbs) {
>>>>> +
>>>>> + // We had CBs in the bypass list before. There is nothing else to do if:
>>>>> + // There were only non-lazy CBs before, in this case, the bypass timer
>>>> Kind of misleading. I would replace "There were only non-lazy CBs before" with
>>>> "There was at least one non-lazy CBs before".
>>> I really mean "There were only non-lazy CBs ever queued in the bypass list
>>> before". That's the bypass_is_lazy variable. So I did not fully understand your
>>> suggested comment change.
>>>
>>>>> + // or GP-thread will handle the CBs including any new lazy ones.
>>>>> + // Or, the new CB is lazy and the old bypass-CBs were also lazy. In this
>>>>> + // case the old lazy timer would have been setup. When that expires,
>>>>> + // the new lazy one will be handled.
>>>>> + if (ncbs && (!bypass_is_lazy || lazy)) {
>>>>> local_irq_restore(flags);
>>>>> } else {
>>>>> // No-CBs GP kthread might be indefinitely asleep, if so, wake.
>>>>> @@ -479,6 +552,10 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
>>>>> trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
>>>>> TPS("FirstBQwake"));
>>>>> __call_rcu_nocb_wake(rdp, true, flags);
>>>>> + } else if (bypass_is_lazy && !lazy) {
>>>>> + trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
>>>>> + TPS("FirstBQwakeLazy2Non"));
>>>>> + __call_rcu_nocb_wake(rdp, true, flags);
>>>>
>>>> Not sure we need this chunk. Since there are pending callbacks anyway,
>>>> nocb_gp_wait() should be handling them and it will set the appropriate
>>>> timer on the next loop.
>>>
>>> We do, because those pending callbacks could be due to a bypass-list flush
>>> rather than CBs that were already pending, right? I do recall missed
>>> wake-ups of non-lazy CBs, which then had to wait for the full lazy timer
>>> duration, slowing down synchronize_rcu(), which is on the ChromeOS boot
>>> critical path!
>>>
>>
>> Just to add more details, consider the series of events:
>>
>> 1. Only lazy CBs are ever queued. The lazy timer is armed for multiple
>> seconds. rcu_segcblist_pend_cbs() remains false.
>>
>> 2. The first non-lazy CB triggers the code that does the bypass rate-limit
>> thing.
>>
>> 3. The bypass list is flushed because it is a non-lazy CB and we need to
>> start GP processing soon.
>
> Correcting the events, #3 does not happen if we got here.
>
>>
>> 4. Due to the flush, rcu_segcblist_pend_cbs() is now true.
>
> So rcu_segcblist_pend_cbs() cannot be true.
>
>> 5. We reach this "else if" clause because bypass_is_lazy means only lazy CBs
>> were ever buffered. We need to reprogram the timer or do an immediate wake up.
>> That's the intention of __call_rcu_nocb_wake().
>>
>> I really did see #1 and #2 trigger during boot-up itself and cause a
>> multi-second boot regression.
>
> So maybe this hunk is not needed any more and the boot regression is fine. I
> can try to drop this hunk and run the tests again...

Ah, now I know why I got confused. I *used* to flush the bypass list when !lazy
CBs showed up. Paul suggested that was overkill. With that old, overkill
approach I was missing a wake-up, which was likely causing the boot regression;
forcing a wake-up fixed that. Now, in v5, I no longer do the flush on a !lazy
rate-limit.
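
To make that enqueue-side decision easier to follow, here is a rough
stand-alone sketch (a toy userspace model with made-up names, not the kernel
code and not this patch): a wake-up is skipped only when the bypass list
already holds a non-lazy CB, or when the new CB is lazy and is joining an
already-populated bypass list.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for the per-CPU rcu_data fields discussed above. */
struct toy_rdp {
	long ncbs;     /* CBs already in the bypass list (before this enqueue) */
	long lazy_len; /* how many of those are lazy */
};

/* Should enqueueing this CB wake the nocb GP kthread / rearm a short timer? */
static bool needs_wake(const struct toy_rdp *rdp, bool new_cb_is_lazy)
{
	bool bypass_is_lazy = (rdp->ncbs == rdp->lazy_len);

	if (!rdp->ncbs)
		return true; /* first bypass CB: take the usual wake path */

	/*
	 * No wake needed if a non-lazy CB is already queued (a wake or short
	 * timer is already due for it), or if the new CB is lazy too (the
	 * existing timer already covers it).
	 */
	if (!bypass_is_lazy || new_cb_is_lazy)
		return false;

	/*
	 * Lazy-only bypass list but a non-lazy CB just arrived: wake now so
	 * it does not wait out the multi-second lazy timer.
	 */
	return true;
}

int main(void)
{
	struct toy_rdp rdp = { .ncbs = 5, .lazy_len = 5 }; /* lazy-only list */

	printf("lazy CB arrives:     wake=%d\n", needs_wake(&rdp, true));
	printf("non-lazy CB arrives: wake=%d\n", needs_wake(&rdp, false));
	return 0;
}

The last case in the sketch is the one the "else if (bypass_is_lazy && !lazy)"
hunk above was meant to handle.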

I am sorry for the confusion. Either way, in my defense, this is just an extra
bit of code that I have to delete. This code is hard. I have mostly relied on
test-driven development, but now, thanks to this review, I am learning the code
more and more...

Thanks,

- Joel