Re: [00/17] Large Blocksize Support V3
From: David Chinner
Date: Fri Apr 27 2007 - 04:04:09 EST
On Fri, Apr 27, 2007 at 12:04:03AM -0700, Andrew Morton wrote:
> On Fri, 27 Apr 2007 16:09:21 +1000 David Chinner <dgc@xxxxxxx> wrote:
>
> > On Thu, Apr 26, 2007 at 10:15:28PM -0700, Andrew Morton wrote:
> > > On Fri, 27 Apr 2007 14:20:46 +1000 David Chinner <dgc@xxxxxxx> wrote:
> > >
> > > > > blocksizes via this scheme - instantiate and lock four pages and go for
> > > > > it.
> > > >
> > > > So now how do you get block aligned writeback?
> > >
> > > in writeback and pageout:
> > >
> > > if (page->index & mapping->block_size_mask)
> > > continue;
> >
> > So we might do writeback on one page in N - how do we
> > make sure none of the other pages are reclaimed while we are doing
> > writeback on this bclok?
>
> By marking them all dirty when one is marked dirty.
>
> David, you're perfectly capable of working all this out yourself. But
> you're trying not to. Please stop this game.
I've looked at all this but I'm trying to work out if anyone
else has looked at the impact of doing this. I have direct experience
with this form of block aggregation - this is pretty much what is
done in irix - and it's full of nasty, ugly corner cases.
I've got several year-old Irix bugs assigned that are hit every so
often where one page in the aggregated set has the wrong state, and
it's simply not possible to either reproduce the problem or work out
how it happened. The code has grown too complex and convoluted, and
by the time the problem is noticed (either by hang, panic or bug
check) the cause of it is long gone.
I don't want to go back to having to deal with this sort of problem
- I'd much prefer to have a design that does not make the same
mistakes that lead to these sorts of problem.
> > > > You basically have to
> > > > jump through nasty, nasty hoops, to handle corner cases that are introduced
> > > > because the generic code can no longer reliably lock out access to a
> > > > filesystem block.
> >
> > This way lies insanity.
>
> You're addressing Christoph's straw man here.
No, I'm speaking from years of experience working on a
page/buffer/chunk cache capable of using both large pages and
aggregating multiple pages. It has, at times, almost driven me
insane and I don't want to go back there.
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/