Re: [RFC] Heads up on sys_fallocate()

From: Arnd Bergmann
Date: Sun Mar 04 2007 - 19:46:37 EST


On Monday 05 March 2007, Anton Altaparmakov wrote:
> An alternative would be to allocate blocks and then, when the data is
> written, perform the compression and free any blocks that are no longer
> needed because the data has shrunk sufficiently.  Depending on the
> implementation details, this could create horrible fragmentation: you
> would allocate a large consecutive region and then drop random blocks
> from it, leaving the file fragmented.

Unfortunately, this is not as easy on logfs, because there is no point
in allocating a block when there is no data to write into it. Fragmentation
on flash media is free, but a block can never be modified in place without
erasing it first, so the data will always be written to a new location on
the next write access anyway.

One option that might work (similar to what you describe in your other mail)
is to keep a per-inode count of reserved blocks, without allocating specific
blocks for them. The journal then needs to record the total number of
reserved blocks across all files and keep that total in sync with the
per-inode reservations.
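
To make that concrete, here is a minimal user-space sketch of the accounting
I have in mind. All names (rsv_super, rsv_inode, and the helpers) are made up
for illustration; this is not logfs code, just a model of keeping a per-inode
reserved count in sync with a filesystem-wide total that the journal would
have to record.

/*
 * Hypothetical model of block reservation accounting: reservations are
 * counted, never tied to concrete blocks, so out-of-place writes on
 * flash stay unaffected.
 */
#include <stdio.h>

struct rsv_super {
	long free_blocks;	/* blocks neither allocated nor reserved */
	long reserved_blocks;	/* sum of all per-inode reservations */
};

struct rsv_inode {
	long reserved_blocks;	/* blocks promised to this file */
};

/* Reserve space at fallocate() time without picking concrete blocks. */
static int reserve_blocks(struct rsv_super *sb, struct rsv_inode *ino, long n)
{
	if (sb->free_blocks < n)
		return -1;	/* would be -ENOSPC in the kernel */
	sb->free_blocks -= n;
	sb->reserved_blocks += n;
	ino->reserved_blocks += n;
	return 0;
}

/* Consume one reserved block when data is actually written out of place. */
static void consume_reserved(struct rsv_super *sb, struct rsv_inode *ino)
{
	if (ino->reserved_blocks > 0) {
		ino->reserved_blocks--;
		sb->reserved_blocks--;
	} else {
		sb->free_blocks--;	/* unreserved write path */
	}
}

/* Drop a reservation, e.g. on truncate or when compression saved space. */
static void release_reserved(struct rsv_super *sb, struct rsv_inode *ino, long n)
{
	ino->reserved_blocks -= n;
	sb->reserved_blocks -= n;
	sb->free_blocks += n;
}

int main(void)
{
	struct rsv_super sb = { .free_blocks = 1000, .reserved_blocks = 0 };
	struct rsv_inode ino = { .reserved_blocks = 0 };

	reserve_blocks(&sb, &ino, 100);	 /* fallocate() of 100 blocks */
	consume_reserved(&sb, &ino);	 /* first block written */
	release_reserved(&sb, &ino, 99); /* rest given back on truncate */

	printf("free=%ld reserved=%ld inode=%ld\n",
	       sb.free_blocks, sb.reserved_blocks, ino.reserved_blocks);
	return 0;
}

The point of the split is that only the two counters in rsv_super would need
to go into the journal; the per-inode counts can be rebuilt or checked against
that total on mount.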

Arnd <><