Re: RFC: hold i_rwsem until aio completes

From: Jason Gunthorpe
Date: Tue Jan 14 2020 - 14:27:05 EST


On Tue, Jan 14, 2020 at 05:12:13PM +0100, Christoph Hellwig wrote:
> Hi all,
>
> Asynchronous read/write operations currently use a rather magic locking
> scheme, where access to file data is normally protected using a rw_semaphore,
> but if we are doing aio, where the syscall returns to userspace before the
> I/O has completed, we also use an atomic_t to track the outstanding aio
> ops. This scheme has led to lots of subtle bugs in file systems that
> didn't wait for the count to reach zero, and its ad-hoc nature also
> means we have to serialize direct I/O writes that are smaller than the
> file system block size.

I've seen similar locking patterns quite a lot, enough that I've
thought about having a dedicated locking primitive for them. It really
wants to be a rwsem, but, as here, the rwsem rules (release from the
same context that acquired) don't allow it.

The common pattern I'm looking at looks something like this:

'try begin read'() // aka down_read_trylock()

/* The lockdep release hackery you describe,
the rwsem remains read locked */
'exit reader'()

.. delegate unlock to work queue, timer, irq, etc ..

in the new context:

're_enter reader'() // Get our lockdep tracking back

'end reader'() // aka up_read()

vs a typical write side:

'begin write'() // aka down_write()

/* There is no reason to unlock it before kfree of the rwsem memory.
Somehow the user prevents any new down_read_trylock()'s */
'abandon writer'() // The object will be kfree'd with a locked writer
kfree()
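The pattern above can be sketched in userspace C, with pthreads standing in for kernel primitives; the `frlock` name and API are invented for illustration, and a real version would want lockdep hooks at the 'exit reader'/'re_enter reader' points. The key property is that 'end reader' may run in a different context (work queue, timer, irq) than 'try begin read', which a plain pthread_rwlock or rwsem does not permit, and that the write side fences new readers and then waits for existing ones to drain:

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical "delegable rwsem" sketch.  Read locks are a plain
 * count, so releasing from another context is legal by construction. */
struct frlock {
	pthread_mutex_t lock;
	pthread_cond_t  drained;      /* signalled when readers hits 0 */
	int             readers;
	bool            write_locked; /* fences new readers */
};

static void frlock_init(struct frlock *f)
{
	pthread_mutex_init(&f->lock, NULL);
	pthread_cond_init(&f->drained, NULL);
	f->readers = 0;
	f->write_locked = false;
}

/* 'try begin read': fails once a writer has begun (write bias) */
static bool frlock_tryread(struct frlock *f)
{
	bool ok;

	pthread_mutex_lock(&f->lock);
	ok = !f->write_locked;
	if (ok)
		f->readers++;
	pthread_mutex_unlock(&f->lock);
	return ok;
}

/* 'end reader': legal from any context, unlike up_read() */
static void frlock_endread(struct frlock *f)
{
	pthread_mutex_lock(&f->lock);
	if (--f->readers == 0)
		pthread_cond_broadcast(&f->drained);
	pthread_mutex_unlock(&f->lock);
}

/* 'begin write': fence new readers, then wait for the rest to drain.
 * On return all readers are excluded; the caller may destroy the
 * object with the write side still held ('abandon writer'). */
static void frlock_write(struct frlock *f)
{
	pthread_mutex_lock(&f->lock);
	f->write_locked = true;
	while (f->readers > 0)
		pthread_cond_wait(&f->drained, &f->lock);
	pthread_mutex_unlock(&f->lock);
}
```

After frlock_write() returns there is no unlock before freeing the memory, matching the 'abandon writer' step above.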

The typical goal is to provide an object destruction path that can
serialize and fence all readers wherever they may be before proceeding
to some synchronous destruction.

Usually this gets open-coded with some atomic/kref/refcount and a
completion or wait queue. It is often implemented wrongly, lacking the
write-favoring bias of a rwsem, and lacking any lockdep tracking on the
naked completion.
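A userspace sketch of that typical open-coded substitute (the `obj_get`/`obj_put`/`obj_destroy` names are invented; atomics and a condvar stand in for atomic_t and a completion) makes the shortcomings visible: the comment marks the kind of fencing race these ad-hoc versions tend to get wrong, and nothing here is visible to lockdep:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct obj {
	atomic_int      users;  /* outstanding "readers" */
	atomic_bool     dying;  /* crude attempt to fence new users */
	pthread_mutex_t lock;
	pthread_cond_t  done;   /* the "naked completion" */
};

static struct obj *obj_alloc(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	pthread_mutex_init(&o->lock, NULL);
	pthread_cond_init(&o->done, NULL);
	return o;
}

static bool obj_get(struct obj *o)
{
	if (atomic_load(&o->dying))
		return false;
	/* racy: 'dying' may be set between the check and the increment,
	 * the sort of subtle bug these open-coded schemes invite */
	atomic_fetch_add(&o->users, 1);
	return true;
}

/* may run from any completion context */
static void obj_put(struct obj *o)
{
	if (atomic_fetch_sub(&o->users, 1) == 1) {
		pthread_mutex_lock(&o->lock);
		pthread_cond_broadcast(&o->done);
		pthread_mutex_unlock(&o->lock);
	}
}

/* the destruction path: fence, drain, free */
static void obj_destroy(struct obj *o)
{
	atomic_store(&o->dying, true);
	pthread_mutex_lock(&o->lock);
	while (atomic_load(&o->users) > 0)
		pthread_cond_wait(&o->done, &o->lock);
	pthread_mutex_unlock(&o->lock);
	free(o);
}
```

Nothing gives waiters priority over new obj_get() callers, so the destroy path can starve, which is exactly the write-favoring bias a real rwsem-style primitive would provide.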

Not to discourage your patch, but to ask: can we make the solution
more broadly applicable?

Thanks,
Jason