Re: 2.3.x wish list?

Mark H. Wood (mwood@IUPUI.Edu)
Sat, 22 May 1999 13:23:32 -0500 (EST)


On Thu, 20 May 1999, Kevin M. Bealer wrote:
[snippage]
> We see an architectural problem (FSCK takes too long to run), and
> are considering fixing it by making really big blocks. Now, I
> know there are other advantages, but every time the block size
> doubles, the fragmentation loss doubles. If we could stand the
> complexity, a buddy system for pieces over a small limit and under
> page-size might work, but I think we are fixing the wrong thing.
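
To put a rough number on the fragmentation point: if the tail of each
file's final block is more or less uniformly distributed, the expected
waste per file is about half a block, so the loss scales linearly with
block size.  A back-of-envelope sketch (the file count and the
uniform-tail assumption are mine, purely for illustration):

#include <stdio.h>

int main(void)
{
	const unsigned long nfiles = 100000UL;	/* hypothetical file count */
	unsigned long bsize;

	/* Expected internal fragmentation: ~half a block per file,
	 * so each doubling of the block size doubles the total loss. */
	for (bsize = 1024; bsize <= 65536; bsize *= 2) {
		unsigned long waste_kb = nfiles * (bsize / 2) / 1024;
		printf("block %2luK: ~%lu MB expected waste\n",
		       bsize / 1024, waste_kb / 1024);
	}
	return 0;
}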

Assuming for a moment that we are *not* fixing the wrong thing: Novell
did something like this in Netware 4 with their "block suballocation".
You can set up a volume to be based on 4K, 16K, 32K, or 64K blocks, but if
you enable suballocation then tiny files get stuffed into the unused
tail-ends of the final blocks of large files. No, I don't know the
details of how they do it. It seems to work pretty well. I always set up
a Netware server with 64K blocks and suballocation unless I can think of a
specific reason why that won't perform well for the server's particular
tasks.
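
I don't know Novell's on-disk format, so take this only as a toy sketch
of the general idea: keep a list of blocks whose owning file leaves the
tail unused, and carve small files out of those tails instead of giving
each one a whole block.  All the names and numbers here are mine, not
Netware's:

#define BLOCK_SIZE 65536	/* 64K blocks, as on the servers I set up */

struct partial_block {
	unsigned long block_no;		/* which disk block */
	unsigned int used;		/* bytes already claimed */
	struct partial_block *next;
};

static struct partial_block *partials;	/* blocks with free tail space */

/*
 * Try to place a small file of 'len' bytes into an existing tail.
 * On success, return the block number and fill *offset; on failure,
 * return -1 and let the caller allocate a fresh block as usual.
 */
long suballoc(unsigned int len, unsigned int *offset)
{
	struct partial_block *p;

	for (p = partials; p != NULL; p = p->next) {
		if (BLOCK_SIZE - p->used >= len) {
			*offset = p->used;
			p->used += len;
			return (long)p->block_no;
		}
	}
	return -1;	/* fall back to whole-block allocation */
}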

But I agree that taking a broader view of the problem is worthwhile.

-- 
Mark H. Wood, Lead System Programmer   mwood@IUPUI.Edu
Specializing in unusual perspectives for more than twenty years.
