File system compression, not at the block layer

From: Timothy Miller
Date: Fri Apr 23 2004 - 12:24:46 EST


This is probably just another of my silly "they already thought of that and someone is doing exactly this" ideas.

I get the impression that a lot of people interested in doing FS compression want to do it at the block layer. That gets complicated, because compressed data no longer fills whole blocks, so you end up having to allocate partial physical blocks.

Well, why not do the compression at the highest layer?

The idea is something akin to changing this (the roundabout tar syntax is intentional, so the two commands line up):

tar cf - somefiles* > file

To this:

tar cf - somefiles* | gzip > file

Except doing it transparently and for all files.

This way, the disk cache holds only compressed data, which gets decompressed (or compressed) only as a process reads or writes it.
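
To make that concrete, here is a rough userspace sketch of the read side, using zlib as a stand-in for whatever the filesystem would do internally (illustration only, not a proposed implementation). The process only ever sees plain bytes; what sits on disk, and what would sit in the cache, stays compressed:

#include <stdio.h>
#include <zlib.h>

/*
 * Illustration only: the "filesystem" (zlib here) hands the reader
 * plain bytes, while the bytes on disk stay compressed.
 */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.gz\n", argv[0]);
        return 1;
    }

    gzFile in = gzopen(argv[1], "rb");    /* compressed on disk */
    if (!in) {
        fprintf(stderr, "can't open %s\n", argv[1]);
        return 1;
    }

    char buf[4096];
    int n;
    while ((n = gzread(in, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, n, stdout);        /* plain bytes to the reader */

    gzclose(in);
    return 0;
}

Run it on any .gz file and you get the decompressed contents on stdout; the point is just that the consumer never touches the compressed representation.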

For files below a certain size, this is obviously pointless, since you can't save any space. But in many cases, this could speed up I/O for large files that are compressible. (Space is cheap. The only reason to compress is for speed.)
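
As a back-of-envelope on the speed claim (every number below is invented, just to show the shape of the argument): if the disk rather than the CPU is the bottleneck, reading compressed data delivers plain bytes at roughly the smaller of the decompression rate and the disk rate times the compression ratio:

#include <stdio.h>

/* Toy arithmetic for the "compress for speed" argument.
 * All numbers here are made up for illustration. */
int main(void)
{
    double disk_mb_s       = 40.0;   /* raw disk throughput           */
    double ratio           = 2.5;    /* plain size / compressed size  */
    double decompress_mb_s = 200.0;  /* how fast the CPU can inflate  */

    /* Reading compressed data yields disk_mb_s * ratio bytes of plain
     * data per second, unless decompression can't keep up. */
    double effective = disk_mb_s * ratio;
    if (effective > decompress_mb_s)
        effective = decompress_mb_s;

    printf("plain read:      %6.1f MB/s\n", disk_mb_s);
    printf("compressed read: %6.1f MB/s effective\n", effective);
    return 0;
}

With those made-up numbers the compressible file reads about 2.5x faster, while an incompressible one (ratio near 1) gains nothing and just burns CPU.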
