Re: [RFC] new ->perform_write fop

From: Jan Kara
Date: Fri May 21 2010 - 14:59:02 EST


On Fri 21-05-10 09:05:24, Dave Chinner wrote:
> On Thu, May 20, 2010 at 10:12:32PM +0200, Jan Kara wrote:
> > Hmm, I was thinking about it and I see two options how to get out
> > of problems:
> > a) Filesystems which are not able to handle hole punching will allow
> > multipage writes only after EOF (which can be easily undone by
> > truncate in case of failure). That should actually cover lots of
> > cases we are interested in (I don't expect multipage writes to holes
> > to be a common case).
>
> multipage writes to holes are a relatively common operation in the
> HPC space that XFS is designed for (e.g. calculations on huge sparse
> matrices), so I'm not really fond of this idea....
Well, XFS could still handle them because it is able to do hole punching,
but I get your point.

> > b) E.g. ext4 can do even without hole punching. It can allocate extent
> > as 'unwritten' and when something during the write fails, it just
> > leaves the extent allocated and the 'unwritten' flag makes sure that
> > any read will see zeros. I suppose that other filesystems that care
> > about multipage writes are able to do similar things (e.g. btrfs can
> > do the same as far as I remember, I'm not sure about gfs2).
>
> Allocating multipage writes as unwritten extents turns off delayed
> allocation and hence we'd lose all the benefits that this gives...
Ah, sorry. That was a short-circuit in my brain. But when we do delayed
allocation, I don't see why we should actually need any hole punching...
The write needs to:
a) reserve enough blocks for the write - I don't know about other
filesystems, but for ext4 this just means incrementing a counter.
b) copy the data page by page.
c) release the unused part of the reservation (i.e. decrement the counter)
if we actually copied less than we originally expected.

Am I missing something?
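
For illustration, here is a minimal userspace sketch of that reserve /
copy / release-unused flow (not ext4 or kernel code; all names are made
up, and one block per page is assumed for simplicity):

/*
 * Toy model of the delayed-allocation write path sketched above.
 * Illustrative only: blocks are merely *reserved* up front (a counter),
 * so a short copy just returns the unused part of the reservation and
 * nothing on disk has to be undone.
 */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

static long reserved_blocks;	/* models the filesystem-wide counter */

/* a) reserve: bump the counter, nothing is allocated on disk yet */
static int reserve_blocks(long nr)
{
	reserved_blocks += nr;
	return 0;
}

/* c) give back the part of the reservation we did not use */
static void release_blocks(long nr)
{
	reserved_blocks -= nr;
}

/* b) copy data page by page; may stop short (e.g. on a fault) */
static long copy_pages(const char *src, char *dst, long nr_pages)
{
	long i;

	for (i = 0; i < nr_pages; i++) {
		if (i == 3)	/* pretend page 3 faults: short copy */
			return i;
		memcpy(dst + i * PAGE_SIZE, src + i * PAGE_SIZE, PAGE_SIZE);
	}
	return nr_pages;
}

int main(void)
{
	static char src[8 * PAGE_SIZE], dst[8 * PAGE_SIZE];
	long want = 8, copied;

	reserve_blocks(want);			/* step a) */
	copied = copy_pages(src, dst, want);	/* step b) */
	if (copied < want)			/* step c) */
		release_blocks(want - copied);

	printf("copied %ld of %ld pages, %ld blocks still reserved\n",
	       copied, want, reserved_blocks);
	return 0;
}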

Honza
--
Jan Kara <jack@xxxxxxx>
SUSE Labs, CR