Re: [PATCH 1/5] locking: Add rwsem_is_write_locked()

From: Peter Zijlstra
Date: Fri Sep 08 2023 - 06:44:55 EST


On Fri, Sep 08, 2023 at 09:00:08AM +1000, Dave Chinner wrote:

> > Right, but if you're not the lock owner, your answer to the question is
> > a dice-roll, it might be locked, it might not be.
>
> Except that the person writing the code knows the call chain that
> leads up to that code, and so they have a pretty good idea whether
> the object should be locked or not. If we are running that code, and
> the object is locked, then it's pretty much guaranteed that the
> owner of the lock is the code that executed the check, because otherwise
> we have a *major lock implementation bug*.

Agreed, and this is fine. However, there's been some very creative
'use' of the _is_locked() class of functions in the past that did not
follow 'common' sense.

If all usage were of the form "I should be holding this, let's
check", I probably wouldn't have this bad feeling about things.
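
For illustration, the usage pattern I can live with looks something
like the below, where the foo bits are made up for the example and
only rwsem_is_write_locked() comes from this series:

	/* Caller must hold foo->lock for writing. */
	static void foo_frob(struct foo *foo)
	{
		/* Cheap "I should be holding this" check that still
		 * works when lockdep is not built in. */
		WARN_ON_ONCE(!rwsem_is_write_locked(&foo->lock));
		foo->frobbed = true;
	}

That is: assert your own locking, never somebody else's.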

> > Most devs should run with lockdep on when writing new code, and I know
> > the sanitizer robots run with lockdep on.
> >
> > In general there seems to be a ton of coverage with lockdep on.
>
> *cough*
>
> Bit locks, semaphores, and all sorts of other constructs for IO
> serialisation (like inode_dio_wait()) have no lockdep coverage at
> all. IOWs, large chunks of many filesystems, the VFS and the VM have
> little to no lockdep coverage at all.

True, however I was commenting on the claim that VM code has
duplicate asserts, with the implication that this is because not a
lot of people run with lockdep on.
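
(FWIW, constructs like the bit locks you mention can be taught
lockdep by hand. A rough sketch, where every foo_* name is invented
and only the lockdep_map machinery and bit_spin_lock() are real API:

	static struct lockdep_map foo_bitlock_map =
		STATIC_LOCKDEP_MAP_INIT("foo_bitlock", &foo_bitlock_map);

	static void foo_bit_lock(unsigned long *word)
	{
		/* Tell lockdep before blocking, so a deadlock is
		 * reported rather than hit. */
		lock_map_acquire(&foo_bitlock_map);
		bit_spin_lock(FOO_LOCK_BIT, word);
	}

	static void foo_bit_unlock(unsigned long *word)
	{
		bit_spin_unlock(FOO_LOCK_BIT, word);
		lock_map_release(&foo_bitlock_map);
	}

The downside being that somebody has to write and maintain the
annotations, which is rather the point of the complaint above.)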

> > > we also have VM_BUG_ON_MM(!rwsem_is_write_locked(&mm->mmap_lock), mm)
> > > to give us a good assertion when lockdep is disabled.
> >
> > Is that really worth it still? I mean, many of these assertions pre-date
> > lockdep.
>
> And we're trying to propagate them because lockdep isn't a viable
> option for day-to-day testing of filesystems because of its
> overhead vs how infrequently it finds new problems.

... in XFS. Lockdep avoids a giant pile of broken code from entering
the kernel, and the robots still report plenty.
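
For context, with this series the mm assertion mentioned above ends
up looking roughly like so (paraphrased, not a verbatim copy of
include/linux/mmap_lock.h):

	static inline void mmap_assert_write_locked(struct mm_struct *mm)
	{
		lockdep_assert_held_write(&mm->mmap_lock);
		VM_BUG_ON_MM(!rwsem_is_write_locked(&mm->mmap_lock), mm);
	}

The lockdep_assert_held_write() compiles away without lockdep, so the
VM_BUG_ON_MM() line is what still fires on production-ish configs.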

> > > XFS has a problem with using lockdep in general, which is that a worker
> > > thread can be spawned and use the fact that the spawner is holding the
> > > lock. There's no mechanism for the worker thread to ask "Does struct
> > > task_struct *p hold the lock?".
> >
> > Will be somewhat tricky to make happen -- but might be doable. It is
> > however an interface that is *very* hard to use correctly. Basically I
> > think you want to also assert that your target task 'p' is blocked,
> > right?
> >
> > That is: assert @p is blocked and holds @lock.
>
> That addresses the immediate symptom; it doesn't address the large
> problem with lockdep and needing non-owner rwsem semantics.
>
> i.e. synchronous task-based locking models don't work for
> asynchronous multi-stage pipeline processing engines like XFS. The
> lock protects the data object and follows the data object through
> the processing pipeline, whilst the original submitter moves on to
> the next operation to process without blocking.
>
> This is the non-blocking, async processing model that io_uring
> development is pushing filesystems towards, so assuming that we only
> hand a lock to a single worker task and then wait for it to complete
> (i.e. synchronous operation) flies in the face of current
> development directions...

I was looking at things from an interface-abuse perspective: how easy
is it to do the wrong thing? As said, we've had a bunch of really
dodgy code with the _is_locked() class of functions, hence my desire
to find something else.

As to the whole non-owner locking thing, yes, that's problematic. I'm
not convinced async operations require non-owner locking; at the same
time I do see that IO completions pose a challenge.

Coming from the schedulability and real-time corner, non-owner locks
are a nightmare because of the priority inversions they enable. So
yeah, fun to be had, I'm sure.
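
To make the inversion problem concrete, the handoff shape being
discussed is roughly the below, where the my_* and ioend names are
invented and only the *_non_owner() rwsem calls are real API:

	static void my_submit(struct my_ioend *ioend)
	{
		/* Submitter locks on behalf of the pipeline... */
		down_read_non_owner(&ioend->inode->i_rwsem);
		queue_work(my_wq, &ioend->work);
		/* ...and moves on without blocking. */
	}

	static void my_complete(struct work_struct *work)
	{
		struct my_ioend *ioend =
			container_of(work, struct my_ioend, work);

		my_finish_io(ioend);
		/* Released by a task that never acquired it; exactly
		 * what per-task owner tracking (and PI) cannot follow. */
		up_read_non_owner(&ioend->inode->i_rwsem);
	}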