Re: ext2 compression: How about using the Netware principle?

From: Pavel Machek (pavel@suse.cz)
Date: Wed Nov 22 2000 - 08:29:23 EST


Hi!

> - A file is saved to disk
> - If the file isn't touched (read or written to) within <n> days
> (default 14), the file is compressed.
> - If the file isn't compressed more than <n> percent (default 20), the
> file is flagged "can't compress".
> - All file compression is done on low traffic times (default between
> 00:00 and 06:00 hours)
> - The first time a file is read or written to within the <n> days
> interval mentioned above, the file is addressed using realtime
> compression. The second time, the file is decompressed and committed to
> disk (uncompressed).

Oops, that means that merely reading a file followed by a powerfail can
lose the file: the second access decompresses and rewrites it on disk, so a
plain read turns into a write, and losing power in the middle of that
rewrite corrupts data nobody meant to modify. Oops.

Besides: you can do this in userspace with existing e2compr. Should take
less than 2 days to implement.
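
For the curious, a rough sketch of what such a userspace job could look
like, assuming e2compr's usual interface: the ext2 'c' attribute (the bit
"chattr +c" toggles; FS_COMPR_FL via the FS_IOC_GETFLAGS/FS_IOC_SETFLAGS
ioctls in current headers, EXT2_COMPR_FL in the ext2 headers of that era)
marks a file for compression. It covers only the scheduling half of the
quoted scheme, i.e. the 14-day age check run from cron in the low-traffic
window; the 20%-gain test and the decompress-on-access part are left out.

	/*
	 * Sketch only: mark files untouched for AGE_DAYS with the ext2
	 * 'c' (compress) attribute, relying on e2compr to do the rest.
	 * Run from cron during the low-traffic window.
	 */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <ftw.h>
	#include <stdio.h>
	#include <time.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <sys/stat.h>
	#include <linux/fs.h>

	#define AGE_DAYS 14	/* "not touched within <n> days" default */

	static time_t cutoff;

	static int visit(const char *path, const struct stat *st, int type,
			 struct FTW *ftw)
	{
		int fd, flags;

		(void)ftw;
		if (type != FTW_F)
			return 0;		/* regular files only */
		if (st->st_atime > cutoff || st->st_mtime > cutoff)
			return 0;		/* read or written recently */

		fd = open(path, O_RDONLY);
		if (fd < 0)
			return 0;
		if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0 &&
		    !(flags & FS_COMPR_FL)) {
			flags |= FS_COMPR_FL;	/* same bit "chattr +c" sets */
			if (ioctl(fd, FS_IOC_SETFLAGS, &flags) == 0)
				printf("marked for compression: %s\n", path);
		}
		close(fd);
		return 0;
	}

	int main(int argc, char **argv)
	{
		if (argc != 2) {
			fprintf(stderr, "usage: %s <directory>\n", argv[0]);
			return 1;
		}
		cutoff = time(NULL) - (time_t)AGE_DAYS * 24 * 60 * 60;
		return nftw(argv[1], visit, 16, FTW_PHYS) != 0;
	}

Whether setting the bit compresses data already on disk or only data
written afterwards depends on the e2compr version, so take this as the
policy/scheduling part only.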

> Results:
> A minimum of CPU time is wasted compressing/decompressing files.
> The average server I've been out working with has an effective
> compression of somewhere between 30 and 100 per cent.

Results: a no-op on machines that are never powered on during that window,
random corruption after a powerfail between 0:00 and 6:00, ...
								Pavel

-- 
Philips Velo 1: 1"x4"x8", 300gram, 60, 12MB, 40bogomips, linux, mutt,
details at http://atrey.karlin.mff.cuni.cz/~pavel/velo/index.html.



