Re: [PATCH RFC v1 1/2] rcu/tree: Add basic support for kfree_rcu batching
From: Paul E. McKenney
Date: Fri Aug 09 2019 - 16:42:25 EST
On Fri, Aug 09, 2019 at 04:22:26PM -0400, Joel Fernandes wrote:
> On Fri, Aug 09, 2019 at 09:33:46AM -0700, Paul E. McKenney wrote:
> > On Fri, Aug 09, 2019 at 11:39:24AM -0400, Joel Fernandes wrote:
> > > On Fri, Aug 09, 2019 at 08:16:19AM -0700, Paul E. McKenney wrote:
> > > > On Thu, Aug 08, 2019 at 07:30:14PM -0400, Joel Fernandes wrote:
> > > [snip]
> > > > > > But I could make it something like:
> > > > > > 1. Letting ->head grow if ->head_free busy
> > > > > > 2. If head_free is busy, then just queue/requeue the monitor to try again.
> > > > > >
> > > > > > This would even improve performance, but will still risk going out of memory.
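Just to make sure we are imagining the same thing, option 2 might look
something like the following. This is only a rough sketch: the names
(kfree_rcu_cpu, queue_kfree_rcu_work(), KFREE_DRAIN_JIFFIES, and friends)
follow your patch where I could remember them, and the rest is
illustrative, including the assumption that queue_kfree_rcu_work()
returns false when ->head_free is still busy:

	static void kfree_rcu_monitor(struct work_struct *work)
	{
		unsigned long flags;
		struct kfree_rcu_cpu *krcp = container_of(work,
							  struct kfree_rcu_cpu,
							  monitor_work.work);

		spin_lock_irqsave(&krcp->lock, flags);
		if (!queue_kfree_rcu_work(krcp)) {
			/*
			 * The previous batch (->head_free) is still in
			 * flight, so let ->head keep growing and requeue
			 * the monitor to try again later.
			 */
			schedule_delayed_work(&krcp->monitor_work,
					      KFREE_DRAIN_JIFFIES);
		}
		spin_unlock_irqrestore(&krcp->lock, flags);
	}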
> > > > >
> > > > > It seems I can indeed hit an out-of-memory condition once I change it to
> > > > > "letting the list grow" (diff is below, applying on top of this patch) while
> > > > > at the same time removing the schedule_timeout(2) and replacing it with
> > > > > cond_resched() in the rcuperf test. I think the reason is that the rcuperf
> > > > > test starves the worker threads executing in workqueue context after a
> > > > > grace period, so they are unable to get enough CPU time to kfree things fast
> > > > > enough. But I am not fully sure about this and need to test/trace more to
> > > > > figure out why it is happening.
> > > > >
> > > > > If I add back the schedule_timeout_uninterruptible(2) call, the out-of-memory
> > > > > situation goes away.
> > > > >
> > > > > Clearly we need to do more work on this patch.
> > > > >
> > > > > In the regular kfree_rcu_no_batch() case, I don't hit this issue. I believe
> > > > > that since the kfree happens in softirq context in the _no_batch() case, it
> > > > > fares better. The question then, I guess, is how we run the rcu_work in a
> > > > > higher-priority context so that it is not starved and runs often enough.
> > > > > I'll trace more.
> > > > >
> > > > > Perhaps I can also lower the priority of the rcuperf threads to give the
> > > > > worker thread some more room to run and see if anything changes. But then I
> > > > > am not sure whether we're preparing the code for the real world with such
> > > > > modifications.
> > > > >
> > > > > Any thoughts?
> > > >
> > > > Several! With luck, perhaps some are useful. ;-)
> > > >
> > > > o Increase the memory via kvm.sh "--memory 1G" or more. The
> > > > default is "--memory 500M".
> > >
> > > Thanks, this definitely helped.
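For anyone reproducing this, that "--memory 1G" goes on the rcutorture
scripting's command line, so the full incantation would be something like
the following (the scenario name and duration are just examples):

	tools/testing/selftests/rcutorture/bin/kvm.sh --torture rcuperf \
		--configs TREE --memory 1G --duration 10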
>
> Also, I can go back to 500M if I just keep KFREE_DRAIN_JIFFIES at HZ/50. So I
> am quite happy about that. I think I can declare that the "let list grow
> indefinitely" design works quite well, even in the insanely heavily loaded
> case of every CPU in a 16-CPU system with 500M of memory indefinitely doing
> kfree_rcu() in a tight loop with appropriate cond_resched(). And I am left
> thinking: wow, how does this stuff even work at such insane scales :-D
A lot of work by a lot of people over a long period of time. On their
behalf, I thank you for the implied compliment. So once this patch gets
in, perhaps you will have complimented yourself as well. ;-)
But more work is needed, and will continue to be needed as new workloads,
compiler optimizations, and hardware appear. And it would be good to
try this on a really big system at some point.
> > > > o Leave a CPU free to run things like the RCU grace-period kthread.
> > > > You might also need to bind that kthread to that CPU.
> > > >
> > > > o Alternatively, use the "rcutree.kthread_prio=" boot parameter to
> > > > boost the RCU kthreads to real-time priority. This won't do
> > > > anything for ksoftirqd, though.
> > >
> > > I will try these as well.
>
> kthread_prio=50 definitely reduced the probability of OOM but it still
> occurred.
OK, interesting.
> > > > o Along with the above boot parameter, use "rcutree.use_softirq=0"
> > > > to cause RCU to use kthreads instead of softirq. (You might well
> > > > find issues in priority setting as well, but might as well find
> > > > them now if so!)
> > >
> > > Doesn't this one actually reduce the priority of the core RCU work? softirq
> > > will always have higher priority than any task. So wouldn't that have the
> > > effect of not reclaiming things fast enough? (Or, in my case, of not
> > > scheduling the rcu_work which does the reclaim.)
> >
> > For low kfree_rcu() loads, yes, it increases overhead due to the need
> > for context switches instead of softirq running at the tail end of an
> > interrupt. But for high kfree_rcu() loads, it gets you realtime priority
> > (in conjunction with "rcutree.kthread_prio=", that is).
>
> I meant that for high kfree_rcu() loads, a softirq context executing RCU
> callbacks is still better from the point of view of the callbacks running,
> because softirq runs above all else (higher than the highest-priority task),
> so use_softirq=0 would be a downgrade from that perspective if something
> higher than rcutree.kthread_prio is running on the CPU. So unless
> kthread_prio is set to the highest priority, softirq would work better. Did I
> miss something?
Under heavy load, softirq stops running at the tail end of interrupts and is
instead run within the context of a per-CPU ksoftirqd kthread, which runs at
normal SCHED_OTHER priority.
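So to keep callback invocation from being starved by a heavily loaded
SCHED_OTHER mix, the two boot parameters would be combined, for example
(the value 50 is just an illustration, pick whatever fits your workload):

	rcutree.use_softirq=0 rcutree.kthread_prio=50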
> > > > o With any of the above, invoke rcu_momentary_dyntick_idle() along
> > > > with cond_resched() in your kfree_rcu() loop. This simulates
> > > > a trip to userspace for nohz_full CPUs, so if this helps for
> > > > non-nohz_full CPUs, adjustments to the kernel might be called for.
>
> I did not try this yet. But I am wondering why this would help in the
> nohz_idle case? In nohz_idle we already have the tick active when the CPU is
> not idle. I guess it is because a long time may elapse before
> rcu_data.rcu_need_heavy_qs becomes true?
Under your heavy rcuperf load, none of the CPUs would ever be idle, nor would
they ever be in nohz_full userspace context.
In contrast, a heavy-duty userspace-driven workload would transition to and
from userspace for each kfree_rcu(), and that would increment the
dyntick-idle count on each such transition. Adding the
rcu_momentary_dyntick_idle() call emulates a pair of these transitions.
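In rcuperf terms, the tight loop would then look something like the
following. This is only a sketch: the struct and the allocation details
are made up for illustration, but kfree_rcu(), cond_resched(),
rcu_momentary_dyntick_idle(), and torture_must_stop() are the real
interfaces.

	struct foo {
		struct rcu_head rh;
		int a;
	};

	static int kfree_perf_loop(void *arg)
	{
		struct foo *p;

		do {
			p = kmalloc(sizeof(*p), GFP_KERNEL);
			if (p)
				kfree_rcu(p, rh);
			/*
			 * Emulate the pair of user/kernel transitions
			 * that a real nohz_full workload would make
			 * around each kfree_rcu() call.
			 */
			rcu_momentary_dyntick_idle();
			cond_resched();
		} while (!torture_must_stop());
		return 0;
	}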
Thanx, Paul
> > > Ok, will try it.
> > >
> > > Save these bullet points for future reference! ;-) thanks,
> >
> > I guess this is helping me to prepare for Plumbers. ;-)
>
> :-)
>
> thanks, Paul!
>
> - Joel
>