Re: [PATCH] pstore: Revert pmsg_lock back to a normal mutex

From: Qais Yousef
Date: Fri Mar 03 2023 - 15:37:21 EST


On 03/03/23 14:38, Steven Rostedt wrote:
> On Fri, 3 Mar 2023 14:25:23 -0500
> Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
>
> > On Fri, Mar 3, 2023 at 1:37 PM Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
> > >
> > > On Fri, 3 Mar 2023 18:11:34 +0000
> > > Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> > >
> > > > In the normal mutex's adaptive spinning, there is no check for whether the
> > > > waiter has changed, AFAICS (ignoring the ww mutex stuff for a second).
> > > >
> > > > I can see why one may want to do that waiter check, as spinning
> > > > indefinitely while the lock owner stays on the CPU for too long may result
> > > > in excessive power burn. But the normal mutex does not seem to do that.
> > > >
> > > > What makes the rtmutex spin logic different from the normal mutex in this
> > > > scenario, so that the rtmutex wants to do that but the normal one doesn't?
> > >
> > > Well, the point of the patch is that I don't think they should be different
> > > ;-)
> >
> > But there's no "waiter change" check in mutex_spin_on_owner(), right?
> >
> > Then, should mutex_spin_on_owner() also add a call to
> > __mutex_waiter_is_first() ?
>
> Ah, interesting, I missed the __mutex_waiter_is_first() in the mutex code,
> where it looks to do basically the same thing as rt_mutex (but slightly
> differently). From looking at this, it appears that the mutex has FIFO-fair
> logic, where the second waiter will sleep.
>
> It would be interesting to see why John sees such a huge difference between
> the normal mutex and rtmutex if they are doing the same thing. Perhaps the
> priority logic is causing the issue, in which case this will not improve
> anything.

I think that is a good suspect. If the waiters are RT tasks, the root cause
might be a starvation issue due to a bad priority setup, and moving to FIFO
just happens to hide it.

For same-priority RT tasks, we should behave as FIFO too, AFAICS.

If there is a mix of RT and CFS tasks, RT will always win, of course.
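
To make the difference concrete, here is a rough user-space sketch of the
check being discussed (all names are made up for illustration, this is not
the kernel code):

	#include <stdbool.h>

	struct toy_waiter {
		int prio;			/* only relevant for the rtmutex case */
	};

	struct toy_lock {
		struct toy_waiter *top_waiter;	/* FIFO head for mutex,
						   highest-prio waiter for rtmutex */
		bool owner_on_cpu;		/* is the current owner running? */
	};

	/*
	 * Keep spinning only while the owner is running and we are still the
	 * waiter that will be handed the lock next.  With a FIFO list the top
	 * waiter cannot change once we are at the head; with a priority-ordered
	 * tree a later, higher-priority task can displace us, which is the
	 * "waiter changed" case.  Same-priority RT waiters queue in FIFO order,
	 * so for them the two policies should look the same.
	 */
	static bool keep_spinning(const struct toy_lock *l,
				  const struct toy_waiter *w)
	{
		return l->owner_on_cpu && l->top_waiter == w;
	}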

>
> I wonder if we add spinning to normal mutex for the other waiters if that
> would improve things or make them worse?

I see a potential risk, depending on how long the worst-case scenario for this
optimistic spinning can last.

RT tasks can prevent all lower-priority RT and CFS tasks from running.

CFS tasks will lose some precious bandwidth from their sched_slice(), as the
spinning will be accounted as RUNNING time for them even though they were
effectively waiting.
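
IIUC, the normal mutex optimistic spin at least bails out when rescheduling is
due, so a spinner should not be able to hog the CPU past its own preemption
point; something along these lines (again just a toy sketch with invented
names, not the actual kernel code):

	#include <stdbool.h>
	#include <stdatomic.h>

	/* Stand-ins for the real primitives, purely for illustration. */
	static atomic_bool toy_owner_on_cpu;		/* owner currently running? */
	static atomic_bool toy_resched_pending;		/* stand-in for need_resched() */

	static inline void toy_cpu_relax(void) { }	/* stand-in for cpu_relax() */

	/*
	 * Spin only while the owner keeps running and nothing else wants this
	 * CPU.  Return true if the owner stopped running (worth retrying the
	 * acquire), false if the spinner should give up and block instead.
	 */
	static bool toy_spin_on_owner(void)
	{
		while (atomic_load(&toy_owner_on_cpu)) {
			if (atomic_load(&toy_resched_pending))
				return false;
			toy_cpu_relax();
		}
		return true;
	}

The bandwidth concern above still applies though: whatever time is spent in
such a loop is charged to the spinner as run time.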


Cheers

--
Qais Yousef

>
> >
> > > > Another thought: I am wondering if all of them spinning indefinitely might
> > > > be OK for rtmutex as well, since, as you mentioned, preemption is enabled. So
> > > > adding the if (top_waiter != last_waiter) {...} check might be unnecessary? In
> > > > fact it may even be harmful, as you are disabling interrupts in the process.
> > >
> > > The last patch only does the interrupt disabling if the top waiter changes,
> > > which in practice seldom happens.
> > >
> > > But I don't think a non-top waiter should spin if the top waiter is not
> > > running. The top waiter is the one that will get the lock next. If the
> > > owner releases the lock and gives it to the top waiter, then it has to go
> > > through the wakeup of the top waiter.
> >
> > Correct me if I'm wrong, but I don't think it will go through
> > schedule() after spinning, which is what adds the overhead for John.
>
> Only if the lock becomes free.
>
> >
> > > I don't see why a task should spin
> > > to save a wakeup if a wakeup has to happen anyway.
> >
> > What about regular mutexes, does that happen there too or not?
>
> Yes, but in FIFO order, whereas in rt_mutex a second, higher-priority task
> can make the first one sleep. So maybe it's just the priority logic that
> is causing the issues.
>
> >
> > > > Either way, I think a comment should go on top of the "if (top_waiter !=
> > > > waiter)" check.
> > >
> > > What type of comment?
> >
> > A comment explaining why the "if (top_waiter != waiter)" check is essential :-).
>
> You mean, "Don't spin if the next owner is not on any CPU"?
>
> -- Steve