Re: Linux Kernel Dump Summit 2005

From: Pavel Machek
Date: Wed Oct 12 2005 - 04:10:33 EST


Hi!

> > CMD      | NET TIME (in seconds)          | OUTPUT SIZE (in bytes)
> > ---------+--------------------------------+------------------------
> > cp       |  35.94 (usr   0.23, sys 14.16) | 2,121,438,352 (100.0%)
> > lzf      |  54.30 (usr  35.04, sys 13.10) | 1,959,473,330 ( 92.3%)
> > gzip -1  | 200.36 (usr 186.84, sys 11.73) | 1,938,686,487 ( 91.3%)
> > ---------+--------------------------------+------------------------
> >
> > Although it is too early to say whether lzf's compression ratio is
> > good enough, its compression speed is indeed impressive.
>
> As you say, the speed of lzf relative to gzip is impressive.
>
> However, if the properties of the kernel dump mean that it is not suitable
> for compression, then surely it is not efficient to spend any time on it.
>
> > And the result also suggests that it is too early to give up the
> > idea of a full dump with compression.
>
> Are you sure? :-)
> If we are talking about systems with 32GB of memory, then we must be
> talking about organisations who can afford an extra 100GB of disk space
> just for keeping their kernel dump files.
>
> I would expect that speed of recovery would always be the primary concern.
> Would you agree?

Notice that the suspend2 project actually introduced compression *for
speed*: done right, writing the image compressed is faster than writing
it raw. See Jamie Lokier's description of how to *never* slow down.
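
To make the trick concrete, here is a minimal userspace sketch of the
per-block fallback (my illustration, not suspend2's actual code or
on-disk format): compress each block with liblzf, but store the raw
bytes whenever compression does not shrink them, so even a completely
incompressible dump costs only a failed compression attempt and one
tag byte per block. The write_block()/TAG_* names and the
tag-plus-length framing are made up for the example; lzf_compress()
is liblzf's real entry point.

/* blockzip.c -- per-block LZF compression with raw fallback.
 * Build: cc -o blockzip blockzip.c -llzf */
#include <stdint.h>
#include <stdio.h>
#include <lzf.h>

#define BLOCK_SIZE 4096

enum { TAG_RAW = 0, TAG_LZF = 1 };

/* Write one block (len <= BLOCK_SIZE), compressed only when that is
 * a strict win.  Framing is native-endian tag + length + payload. */
static int write_block(FILE *out, const unsigned char *buf, uint32_t len)
{
    unsigned char cbuf[BLOCK_SIZE];

    /* lzf_compress() returns 0 when the output does not fit in
     * out_len bytes; asking for len - 1 means "smaller or bust". */
    uint32_t clen = lzf_compress(buf, len, cbuf, len - 1);

    uint8_t tag = clen ? TAG_LZF : TAG_RAW;
    uint32_t plen = clen ? clen : len;
    const unsigned char *payload = clen ? cbuf : buf;

    if (fwrite(&tag, sizeof tag, 1, out) != 1 ||
        fwrite(&plen, sizeof plen, 1, out) != 1 ||
        fwrite(payload, 1, plen, out) != plen)
        return -1;
    return 0;
}

int main(void)
{
    unsigned char buf[BLOCK_SIZE];
    size_t n;

    /* e.g.  ./blockzip < image > image.blz */
    while ((n = fread(buf, 1, BLOCK_SIZE, stdin)) > 0)
        if (write_block(stdout, buf, (uint32_t)n) < 0)
            return 1;
    return 0;
}

And the speed argument: from the table above, cp moves the 2.1GB image
at roughly 59MB/s while lzf pushes it through at roughly 39MB/s, so
run serially the compressor is still the bottleneck on that box. But
run the compressor in a thread overlapped with the disk writes and the
total time becomes max(compress, write) instead of their sum; once the
compressor's input rate matches the disk's raw write rate, compression
costs nothing, and every percent of ratio is pure gain -- that is the
sense in which it *never* slows down.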

Pavel
--
if you have sharp zaurus hardware you don't need... you know my address