Re: [PATCH v7 05/23] locking/mutex: Rework task_struct::blocked_on

From: John Stultz
Date: Thu Dec 21 2023 - 12:53:15 EST


On Thu, Dec 21, 2023 at 2:13 AM Metin Kaya <metin.kaya@xxxxxxx> wrote:
> On 20/12/2023 12:18 am, John Stultz wrote:
> > +static inline struct mutex *get_task_blocked_on(struct task_struct *p)
> > +{
> > +	lockdep_assert_held(&p->blocked_lock);
> > +
> > +	return p->blocked_on;
> > +}
> > +
> > +static inline struct mutex *get_task_blocked_on_once(struct task_struct *p)
> > +{
> > +	return READ_ONCE(p->blocked_on);
> > +}
>
> These functions make me think we should use [get, set]_task_blocked_on()
> for all accesses to the blocked_on & blocked_on_state fields, but there
> are some places in this patch that still access those fields directly.
> Is this OK?

Yeah. In the reworks I've probably added some subtle uses that should
be switched to the accessors or better commented.
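
For the write side, I'm thinking of something along these lines as a
companion accessor (just an untested sketch; the helper name is made
up here for illustration):

static inline void set_task_blocked_on(struct task_struct *p, struct mutex *m)
{
	/*
	 * Assert the same locking rule as the getter, so any
	 * remaining direct writes to ->blocked_on stand out.
	 */
	lockdep_assert_held(&p->blocked_lock);

	p->blocked_on = m;
}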


> > @@ -4341,6 +4342,11 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
> >
> >  		ttwu_queue(p, cpu, wake_flags);
> >  	}
> > +	/* XXX can we do something better here for !CONFIG_SCHED_PROXY_EXEC case */
>
> The blocked_on* fields are now used even in the !CONFIG_SCHED_PROXY_EXEC
> case. I'm unsure if we can get rid of the lock & unlock lines or the
> entire hunk, but would that be too ugly? I wish we could convert
> blocked_on_state to an atomic variable.

Well, atomics have their own costs, but it's something I'll think
about. As for the XXX comment above, the idea I'm pondering is that in
the !PROXY_EXEC case the blocked_on_state doesn't provide much
utility, so maybe there's a way to opt out of that portion (while
keeping blocked_on around for the debug checking). Even in the
PROXY_EXEC case, we might be able to move this check to
proxy_needs_return(), but I need to think the logic through to make
sure we'd always hit that path when we need to make the transition.
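
Roughly the shape I'm imagining (an untested sketch; the helper name
is made up, and BO_WAKING/BO_RUNNABLE stand in for whatever the
wake-up transition of blocked_on_state ends up being):

#ifdef CONFIG_SCHED_PROXY_EXEC
static inline void ttwu_blocked_on_runnable(struct task_struct *p)
{
	/* Called from ttwu with p->pi_lock held and irqs disabled. */
	raw_spin_lock(&p->blocked_lock);
	if (p->blocked_on_state == BO_WAKING)
		p->blocked_on_state = BO_RUNNABLE;
	raw_spin_unlock(&p->blocked_lock);
}
#else
/* Without proxy-exec the state machine buys us nothing; skip the lock. */
static inline void ttwu_blocked_on_runnable(struct task_struct *p)
{
}
#endif

Then ttwu could call it unconditionally and the !PROXY_EXEC build
would compile the locking away.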

I've also wondered if blocked_on_state could eventually be merged into
task->__state, but the rules there are more subtle, so for my own
sanity I've kept it separate here for now.

thanks
-john