Re: [patch 7/8] fs: fix or note I_DIRTY handling bugs in filesystems

From: Nick Piggin
Date: Tue Jan 04 2011 - 01:05:07 EST

On Wed, Dec 29, 2010 at 10:01:09AM -0500, Christoph Hellwig wrote:
> As mentioned last round I think the exporting of inode_lock and pushing
> of the I_DIRTY* complexities into the filesystems can be avoided. See

Yes I did see that, I hoped to continue discussion of that detail.

Let me start out by saying OK I will agree to hold off that change
until inode_lock is removed at least, and concentrate on just the

However I strongly believe that filesystems should be able to access
and manipulate the inode dirty state directly. If you agree with that,
then I think they should be able to access the lock required for that.
Filesystems will most likely want to keep their internal state in sync
with VFS-visible state (e.g. like your hfsplus patches), and _every_
time we do "loose" coupling between state bits like this (e.g. page and
buffer state; page and pte state; etc.), it turns out to be a huge mess
of races, subtle code, and ordering problems.

> the patch below, which compiles and passes xfstests for xfs, but
> otherwise isn't quite done yet. The only code change vs the opencoded
> variant in the patch is that we do a useless inode_lock roundtrip

I dislike this style, except where it has some real advantages like
the get_block case. I prefer just to turn the existing
inode_writeback_begin into a "__"-prefixed variant, and have
inode_writeback_begin itself do the locking and masking for
filesystems.

> for a non-dirty inode on gfs2, which is I think is acceptable,
> especially once we have the lock split anyway.

The bigger issue IMO is whether filesystems want to be smarter with
dirty bit handling and keep more internal state in sync with it. I
don't see any problem at all with allowing them to lock the dirty
state (but I will hold off the patch for now, as said).

