Re: [RFC] new ->perform_write fop

From: Dave Chinner
Date: Fri May 21 2010 - 20:31:49 EST


On Fri, May 21, 2010 at 09:50:54AM -0400, Josef Bacik wrote:
> On Fri, May 21, 2010 at 09:05:24AM +1000, Dave Chinner wrote:
> > On Thu, May 20, 2010 at 10:12:32PM +0200, Jan Kara wrote:
> > > On Thu 20-05-10 09:50:54, Dave Chinner wrote:
> > > b) E.g. ext4 can get by even without hole punching. It can allocate the
> > > extent as 'unwritten', and if something fails during the write, it just
> > > leaves the extent allocated; the 'unwritten' flag ensures that any
> > > read will see zeros. I suppose that other filesystems that care
> > > about multipage writes are able to do similar things (e.g. btrfs can
> > > do the same as far as I remember; I'm not sure about gfs2).
> >
> > Allocating multipage writes as unwritten extents turns off delayed
> > allocation, and hence we'd lose all the benefits that delayed
> > allocation gives us...
> >
>
> I just realized we have another problem: the mmap_sem/page_lock deadlock.
> Currently BTRFS is susceptible to this, since we don't prefault any of the
> pages in yet. If we're going to do multi-page writes, we're going to need a
> way to fault in all of the iovecs at once, so that when we do the
> pagefault_disable()/copy/pagefault_enable() sequence we don't just end up
> copying the first iovec. Nick, have you done something like this already?

I have patches that already do this, but the big issue is that it is
inherently racy. Prefaulting does not guarantee that the prefaulted
page is still resident by the time we disable page faults - it may
already have been reclaimed. Basically, we have to design for EFAULT
occurring, because even prefaulting does not prevent it.
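
To make it concrete, the copy loop has to look roughly like this - a
minimal sketch along the lines of generic_perform_write() in
mm/filemap.c (the current mainline single-page loop, not the
multipage patches themselves):

again:
	/* fault in the first segment of the iov before copying */
	if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
		status = -EFAULT;
		break;
	}

	/* no page faults allowed while we hold the page lock */
	pagefault_disable();
	copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
	pagefault_enable();

	if (unlikely(copied == 0)) {
		/*
		 * The prefaulted page was reclaimed before the atomic
		 * copy ran. Shrink the copy to a single segment's
		 * worth and go around again.
		 */
		bytes = min_t(unsigned long, PAGE_CACHE_SIZE - offset,
			      iov_iter_single_seg_count(i));
		goto again;
	}

The copied == 0 case is exactly that race biting - the prefault
succeeded, but the page was gone again by the time the copy ran.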

> If not, I assume
> I can just loop through all the iovecs, call fault_in_pages_readable on
> each of them, and be good to go, right? Thanks,

That's effectively what I've done, but it's still no guarantee.
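
FWIW, the prefault side is just a loop over the segments - something
like this hypothetical helper (not in mainline;
iov_iter_fault_in_readable() only touches the first segment):

static int fault_in_iov_segments(const struct iovec *iov,
				 unsigned long nr_segs)
{
	unsigned long seg;

	for (seg = 0; seg < nr_segs; seg++) {
		if (fault_in_pages_readable(iov[seg].iov_base,
					    iov[seg].iov_len))
			return -EFAULT;
	}
	return 0;
}

Success here still tells you nothing about what is resident by the
time the atomic copy actually runs.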

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx