Re: Question on task_blocks_on_rt_mutex()
From: Paul E. McKenney
Date: Thu Sep 03 2020 - 16:06:43 EST
On Wed, Sep 02, 2020 at 08:54:10AM -0700, Paul E. McKenney wrote:
> On Tue, Sep 01, 2020 at 06:51:28PM -0700, Davidlohr Bueso wrote:
> > On Tue, 01 Sep 2020, Paul E. McKenney wrote:
> >
> > > And it appears that a default-niced CPU-bound SCHED_OTHER process is
> > > not preempted by a newly awakened MAX_NICE SCHED_OTHER process. OK,
> > > OK, I never waited for more than 10 minutes, but on my 2.2GHz system
> > > that is close enough to a hang for most people.
> > >
> > > Which means that the patch below prevents the hangs. And maybe does
> > > other things as well, firing rcutorture up on it to check.
> > >
> > > But is this indefinite delay expected behavior?
> > >
> > > This reproduces for me on current mainline as follows:
> > >
> > > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --torture lock --duration 3 --configs LOCK05
> > >
> > > This hangs within a minute of boot on my setup. Here "hangs" is defined
> > > as stopping the per-15-second console output of:
> > > Writes: Total: 569906696 Max/Min: 81495031/63736508 Fail: 0
> >
> > OK, this doesn't seem to be related to lockless wake_qs then. FYI, there
> > have been missed wakeups in the past where wake_q_add() fails the cmpxchg
> > because the task already has a wakeup pending, leading to the actual wakeup
> > occurring before its corresponding wake_up_q(). This is why we have
> > wake_q_add_safe(). But for rtmutexes, because there is no lock stealing,
> > only the top waiter is awoken, and because try_to_take_rt_mutex() is done
> > under lock->wait_lock, I was not seeing an actual race here.
>
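> For reference, here is an abridged sketch of the wake_q API that Davidlohr
> mentions, based on kernel/sched/core.c (simplified, so please consult the
> actual source for the memory-barrier and locking details):
>
>	static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
>	{
>		struct wake_q_node *node = &task->wake_q;
>
>		/* The cmpxchg fails if this task already has a wakeup queued. */
>		if (cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL))
>			return false;
>		*head->lastp = node;
>		head->lastp = &node->next;
>		return true;
>	}
>
>	void wake_q_add(struct wake_q_head *head, struct task_struct *task)
>	{
>		if (__wake_q_add(head, task))
>			get_task_struct(task);	/* Reference held until wake_up_q(). */
>	}
>
>	/* Caller already holds a reference; drop it if the task was queued. */
>	void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
>	{
>		if (!__wake_q_add(head, task))
>			put_task_struct(task);
>	}
>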
> This problem is avoided if stutter_wait() does the occasional sleep.
> I would have expected preemption to take effect, but even setting the
> kthreads in stutter_wait() to MAX_NICE doesn't help. The current fix
> destroys the intended instant-on nature of stutter_wait(), so the eventual
> fix will need to use hrtimer-based sleeps or some such.
And here is my current best shot at a workaround/fix/whatever. Thoughts?
Thanx, Paul
------------------------------------------------------------------------
commit d93a64389f4d544ded241d0ba30b2586497f5dc0
Author: Paul E. McKenney <paulmck@xxxxxxxxxx>
Date: Tue Sep 1 16:58:41 2020 -0700
torture: Periodically pause in stutter_wait()
Running locktorture scenario LOCK05 results in hangs:
tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --torture lock --duration 3 --configs LOCK05
The lock_torture_writer() kthreads set themselves to MAX_NICE while
running SCHED_OTHER. Other locktorture kthreads run at default niceness,
also SCHED_OTHER. This results in these other locktorture kthreads
indefinitely preempting the lock_torture_writer() kthreads. Note that
the cond_resched() in the stutter_wait() function's loop is ineffective
because this scenario is built with CONFIG_PREEMPT=y.
It is not clear that such indefinite preemption is supposed to happen, but
in the meantime this commit prevents kthreads running in stutter_wait()
from being completely CPU-bound, thus allowing the other threads to get
some CPU in a timely fashion. This commit also uses hrtimers to provide
very short sleeps to avoid degrading the sudden-on testing that stutter
is supposed to provide.
Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
diff --git a/kernel/torture.c b/kernel/torture.c
index 1061492..5488ad2 100644
--- a/kernel/torture.c
+++ b/kernel/torture.c
@@ -602,8 +602,11 @@ static int stutter_gap;
  */
 bool stutter_wait(const char *title)
 {
-	int spt;
+	ktime_t delay;
+	unsigned i = 0;
+	int oldnice;
 	bool ret = false;
+	int spt;
 
 	cond_resched_tasks_rcu_qs();
 	spt = READ_ONCE(stutter_pause_test);
@@ -612,8 +615,17 @@ bool stutter_wait(const char *title)
 		if (spt == 1) {
 			schedule_timeout_interruptible(1);
 		} else if (spt == 2) {
-			while (READ_ONCE(stutter_pause_test))
+			oldnice = task_nice(current);
+			set_user_nice(current, MAX_NICE);
+			while (READ_ONCE(stutter_pause_test)) {
+				if (!(i++ & 0xffff)) {
+					set_current_state(TASK_INTERRUPTIBLE);
+					delay = 10 * NSEC_PER_USEC;
+					schedule_hrtimeout(&delay, HRTIMER_MODE_REL);
+				}
 				cond_resched();
+			}
+			set_user_nice(current, oldnice);
 		} else {
 			schedule_timeout_interruptible(round_jiffies_relative(HZ));
 		}
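
To spell out the new loop: while stuttering, the kthread drops to MAX_NICE
and, once every 65536 passes (whenever the low 16 bits of i are zero),
briefly sleeps for 10 microseconds via an hrtimer, so the pause stays
nearly instant-on while still reliably ceding the CPU. Restated with
comments (my annotations, not part of the patch itself):

	oldnice = task_nice(current);
	set_user_nice(current, MAX_NICE);	/* Deprioritize while spinning. */
	while (READ_ONCE(stutter_pause_test)) {
		if (!(i++ & 0xffff)) {	/* Once per 64K iterations... */
			set_current_state(TASK_INTERRUPTIBLE);
			delay = 10 * NSEC_PER_USEC;
			schedule_hrtimeout(&delay, HRTIMER_MODE_REL);	/* ...sleep 10us. */
		}
		cond_resched();	/* Ineffective under CONFIG_PREEMPT=y. */
	}
	set_user_nice(current, oldnice);	/* Restore the original niceness. */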