Re: [PATCH tip/core/rcu 4/7] rcu: Unify boost and kthread priorities
From: Paul E. McKenney
Date: Fri Oct 31 2014 - 12:42:41 EST
On Fri, Oct 31, 2014 at 05:22:10PM +0100, Peter Zijlstra wrote:
> On Wed, Oct 29, 2014 at 09:16:02AM -0700, Paul E. McKenney wrote:
>
> > > Also, should we look at running this stuff as deadline in order to
> > > provide interference guarantees etc.. ?
> >
> > Excellent question! I have absolutely no idea what the answer might be.
> >
> > Taking the two sets of kthreads separately...
> >
> > rcub/N: This is for RCU priority boosting. In the preferred common case,
> > these never wake up ever. When they do wake up, all they do is
> > cause blocked RCU readers to get priority boosted. I vaguely
> > recall something about inheritance of deadlines, which might
> > work here. One concern is what happens if the deadline is
> > violated, as this isn't really necessarily an error condition
> > in this case -- we don't know how long the RCU read-side critical
> > section will run once awakened.
>
> Yea, this one is 'hard'. How is this used today? From the previous email
> we've learnt that the default is FIFO-1, iow. it will preempt
> SCHED_OTHER but not much more. How is this used in RT systems, what are
> the criteria for actually changing this?
The old way is to update CONFIG_RCU_BOOST_PRIO and rebuild your kernel,
but a recent commit from Clark Williams provides a boot parameter that
allows this priority to be changed more conveniently.
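For reference, with that commit applied the priority becomes a kernel
command-line setting rather than a build-time one.  A fragment along these
lines (rcutree.kthread_prio being the parameter name from Clark's patch;
the value 5 is just an example):

```
# Kernel boot command line fragment: run the RCU kthreads at RT priority 5
# instead of rebuilding with a different CONFIG_RCU_BOOST_PRIO.
rcutree.kthread_prio=5
```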
> Increase until RCU stops spewing stall warnings, but not so far that
> your workload fails?
Well, you are supposed to determine the highest RT priority at which
your workload might run CPU-bound tasks, and set the boost priority
at some level above that. My model of RCU priority boosting is that
it should be used to make inadvertent high-priority infinite loops
easier to debug, but others might have different approaches.
> Not quite sure how to translate that into dl speak :-), the problem of
> course is that if a DL task starts to trigger the stalls we need to do
> something.
Indeed! ;-)
> > rcuc/N: This is the softirq replacement in -rt, but in mainline all it
> > does is invoke RCU callbacks. It might make sense to give it a
> > deadline of something like a few milliseconds, but we would need
> > to temper that if there were huge numbers of callbacks pending.
> > Or perhaps have it claim that its "unit of work" was some fixed
> > number of callbacks or emptying the list, whichever came first.
> > Or maybe have its "unit of work" also depend on the number of
> > callbacks pending.
>
> Right, so the problem is if we give it insufficient time it will never
> catch up on running the callbacks, ie. more will come in than we can
> process and get out.
Yep, which can result in OOM.
> So if it works by splicing a callback list to a local list, then runs
> until completion and then either immediately starts again if there's
> new work, or goes to sleep waiting for more, _then_ we can already
> assign it DL parameters with the only caveat being the above issue.
>
> The advantage being indeed that if there are 'many' callbacks pending,
> we'd only run a few, sleep, run a few more, etc.. due to the CBS until
> we're done. This smooths out peak interference at the 'cost' of
> additional delays in actually running the callbacks.
>
> We should be able to detect the case where more and more work piles on
> and the actual running does not appear to catch up, but I'm not sure what
> to do about it, seeing how system stability is at risk.
I could imagine having a backup SCHED_FIFO task that handled the
case where callbacks were piling up, but synchronizing it with the
SCHED_DEADLINE task while avoiding callback misordering could be a bit
"interesting". (Recall that callback misordering messes up rcu_barrier().)
> Certainly something to think about..
No argument here! ;-)
Thanx, Paul