Re: [PATCH 1/5]: ufs: missed brelse and wrong baseblk
From: Evgeniy Dushistov
Date: Mon Jun 19 2006 - 09:11:08 EST
On Mon, Jun 19, 2006 at 08:32:29AM +0100, Al Viro wrote:
> On Mon, Jun 19, 2006 at 10:47:21AM +0400, Evgeniy Dushistov wrote:
> > On Sun, Jun 18, 2006 at 06:50:45PM +0100, Al Viro wrote:
> > > * block may be bigger than page. That can cause all sorts of fun
> > > problems in interaction with our VM, since allocation can affect more than
> > > one page and that has to be taken into account.
> >
> > In fact this is not a problem. Blocks in terms of the Linux VFS
> > are fragments in terms of UFS.
>
> And?
>
> > And if the fragment size is >4096 we just don't mount such a file system.
> >
> > So we can easily support 32K blocks.
>
> .... Which means that we have an allocation unit (aka UFS block)
> covering several in-core pages. Which means that if you have a file
> with an 8Kb hole in the beginning, mmap it shared and dirty the first
> couple of pages, you will get a situation where parallel writeout of
> those two pages causes trouble. Both times you'll allocate a full
> block (the file is a couple of megabytes long, so forget about partial
> blocks, relocations, etc.), and both will try to put a reference to
> what they'd allocated into the same inode slot as the first direct
> block. Do you see the problem?
No, I don't see it; can you explain in detail how this affects the
current implementation?

In the case of 1K fragments, an msync of two pages causes 8 calls of
UFS's get_block_t with create == 1 (two 4K pages over 1K fragments).
The calls are serialized by locking, so they run one after another.
Only the first one allocates the block; all the others see that the
reference to the block is non-zero and just return the appropriate
value. See for example ufs_inode_getblock:
	p = (__fs32 *) bh->b_data + block;
	tmp = fs32_to_cpu(sb, *p);
	if (tmp) {
		/* block already allocated: map it and return */
		*phys = tmp + blockoff;
		goto out;
	}

and that's all; nothing will be allocated.
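
To illustrate the point, here is a user-space model of that
check-under-lock pattern (a sketch only, not the kernel code; the
mutex, names and fragment number are invented for the example):

	#include <pthread.h>
	#include <stdint.h>
	#include <stdio.h>

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static uint32_t block_ptr;	/* 0 means a hole, like *p above */
	static int allocations;

	/* Model of get_block_t with create == 1: re-check the pointer
	 * under the lock and allocate only if it is still zero. */
	static uint32_t get_block(void)
	{
		uint32_t tmp;

		pthread_mutex_lock(&lock);
		tmp = block_ptr;
		if (!tmp) {
			block_ptr = tmp = 1000;	/* invented fragment number */
			allocations++;
		}
		pthread_mutex_unlock(&lock);
		return tmp;	/* later callers just reuse it */
	}

	static void *writer(void *arg)
	{
		(void)arg;
		get_block();
		return NULL;
	}

	int main(void)
	{
		pthread_t t[8];
		int i;

		for (i = 0; i < 8; i++)
			pthread_create(&t[i], NULL, writer, NULL);
		for (i = 0; i < 8; i++)
			pthread_join(t[i], NULL);

		printf("allocations: %d\n", allocations);	/* always 1 */
		return 0;
	}

However the 8 calls race, only one of them ever allocates; the other
seven map the block that is already there.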
So is this abstract theory about some abstract implementation of UFS,
or do you see an actual problem in the code?
--
/Evgeniy