[RFC][PATCH 0/4] zram: add zlib compression backend support
From: Sergey Senozhatsky
Date: Thu Aug 13 2015 - 09:56:35 EST
I'll just post this series as a separate thread, I guess; sorry for any
inconvenience. Joonsoo will resend his patch series, so the discussions will
`relocate' anyway.
This patchset uses a different, let's say traditional, zram/zcomp approach:
it defines a new zlib compression backend the same way the lzo and lz4
backends are defined.
The key difference is that zlib requires a zstream for both compression and
decompression. zram has a stream-less decompression path for lzo and lz4, and
it works perfectly fast. In order to support zlib, we need the decompression
path to *optionally require* a zstream. I want to make the
ZCOMP_NEED_READ_ZSTRM flag (backend requires a zstream for decompression)
backend dependent, so we will still have the fastest lzo/lz4 possible.
This is one of the reasons I didn't implement it using the crypto api -- the
crypto api requires a tfm for both compression and decompression. Which
implies that the read path either
a) has to share the idle streams list with the write path, so reads and
   writes will compete for streams, or
b) has to define its own idle streams list. But that would
   1) limit the number of concurrently executed read operations (to the
      number of streams in the list), and
   2) increase memory usage by the module (each stream occupies pages for
      workspace buffers, etc.)
For the time being, the crypto API does not provide stream-less decompression
functions, to the best of my knowledge.
I was, frankly, tempted to rewrite zram to use the crypto API several times,
but each time I couldn't find a real reason to. Yes, *in theory* it will give
people HUGE possibilities to select compression algorithms. But the question
is -- zram has been around for quite some years now, so does anybody need
this flexibility? I can easily picture people selecting between
  ratio                   speed                           alg
  OK compression ratio    very fast                       LZO/LZ4
  very good comp ratio    eh... (but a good comp ratio)   zlib
But anything in the middle is just anything in the middle, IMHO. I can't
convince myself that people really want an

  "eh... comp ratio" + "eh... speed"

algorithm, for example.
From https://code.google.com/p/lz4/ it seems that lzo+lz4+zlib is quite a
representative set of compression algorithms. And zram obviously was missing
the `other side' algorithm -- zlib, for when IO speed is not SO important.
I did some zlib backend testing. A copy-paste from patch 0003:

Copy a dir containing the linux kernel to a zram device (du -sh: 2.3G) and
check the memory usage stats.
The columns are zram's mm_stat: orig_data_size compr_data_size mem_used_total
mem_limit mem_used_max zero_pages num_migrated.

2522685440 1210486447 1230729216 0 1230729216 5461 0
2525872128 1713351248 1738387456 0 1738387456 4682 0
ZLIB uses 484+MiB less memory in the test.
Sergey Senozhatsky (4):
zram: introduce zcomp_backend flags callback
zram: extend zcomp_backend decompress callback
zram: add zlib backend
zram: enable zlib backend support
drivers/block/zram/Kconfig | 14 ++++-
drivers/block/zram/Makefile | 1 +
drivers/block/zram/zcomp.c | 30 +++++++++-
drivers/block/zram/zcomp.h | 12 +++-
drivers/block/zram/zcomp_lz4.c | 8 ++-
drivers/block/zram/zcomp_lzo.c | 8 ++-
drivers/block/zram/zcomp_zlib.c | 120 ++++++++++++++++++++++++++++++++++++++++
drivers/block/zram/zcomp_zlib.h | 17 ++++++
drivers/block/zram/zram_drv.c | 23 ++++++--
9 files changed, 222 insertions(+), 11 deletions(-)
create mode 100644 drivers/block/zram/zcomp_zlib.c
create mode 100644 drivers/block/zram/zcomp_zlib.h