Re: softlockups in multi_cpu_stop

From: Jason Low
Date: Fri Mar 06 2015 - 22:53:26 EST


On Sat, 2015-03-07 at 11:39 +0800, Ming Lei wrote:
> On Sat, Mar 7, 2015 at 11:17 AM, Jason Low <jason.low2@xxxxxx> wrote:
> > On Sat, 2015-03-07 at 11:08 +0800, Ming Lei wrote:
> >> On Sat, Mar 7, 2015 at 10:56 AM, Jason Low <jason.low2@xxxxxx> wrote:
> >> > On Sat, 2015-03-07 at 10:10 +0800, Ming Lei wrote:
> >> >> On Sat, Mar 7, 2015 at 10:07 AM, Davidlohr Bueso <dave@xxxxxxxxxxxx> wrote:
> >> >> > On Sat, 2015-03-07 at 09:55 +0800, Ming Lei wrote:
> >> >> >> On Fri, 06 Mar 2015 14:15:37 -0800
> >> >> >> Davidlohr Bueso <dave@xxxxxxxxxxxx> wrote:
> >> >> >>
> >> >> >> > On Fri, 2015-03-06 at 13:12 -0800, Jason Low wrote:
> >> >> >> > > In owner_running() there are 2 conditions that would make it return
> >> >> >> > > false: if the owner changed or if the owner is not running. However,
> >> >> >> > > that patch continues spinning if there is a "new owner" but it does not
> >> >> >> > > take into account that we may want to stop spinning if the owner is not
> >> >> >> > > running (due to getting rescheduled).
> >> >> >> >
> >> >> >> > So your rationale is that we're missing this need_resched():
> >> >> >> >
> >> >> >> > while (owner_running(sem, owner)) {
> >> >> >> >         /* abort spinning when need_resched */
> >> >> >> >         if (need_resched()) {
> >> >> >> >                 rcu_read_unlock();
> >> >> >> >                 return false;
> >> >> >> >         }
> >> >> >> > }
> >> >> >> >
> >> >> >> > Because the owner_running() would return false, right? Yeah that makes
> >> >> >> > sense, as missing a resched is a bug, as opposed to our heuristics being
> >> >> >> > so painfully off.
> >> >> >> >
> >> >> >> > Sasha, Ming (Cc'ed), does this address the issues you guys are seeing?
> >> >> >>
> >> >> >> For the xfstest lockup, what matters is that the owner isn't running, since
> >> >> >> the following simple change does fix the issue:
> >> >> >
> >> >> > I much prefer Jason's approach, which should also take care of the
> >> >> > issue, as it includes the !owner->on_cpu stop condition to stop
> >> >> > spinning.
> >> >>
> >> >> But the check on owner->on_cpu should be moved outside the loop
> >> >> because the new owner can be scheduled out too, right?
> >> >
> >> > We should keep the owner->on_cpu check inside the loop, otherwise we
> >> > could continue spinning if the owner is not running.
> >>
> >> So how about checking this way outside the loop to avoid the spin?
> >>
> >> if (owner)
> >>         return owner->on_cpu;
> >
> > So these owner->on_cpu checks outside of the loop "fix" the issue as
> > well, but I don't see the benefit of having to guess why we broke out
> > of the spin loop (which may make things less readable), or of checking
> > owner->on_cpu multiple times when one check is enough.
>
> I mean moving the owner->on_cpu check outside the loop, so there is
> only one check covering both the new and the old owner. If it is inside
> the loop, the check only covers the old owner.
>
> Keeping it inside the loop is correct if you are sure the new owner
> can't be scheduled out, but then it would be better to add a comment
> explaining why it can't; it looks like no one has explained that yet.

The new owner can get rescheduled.

And if there's a new owner, then the spinner goes to
rwsem_spin_on_owner() again and checks the new owner's on_cpu.
