[PATCHv2 9/9] zswap: add documentation
From: Seth Jennings
Date: Mon Jan 07 2013 - 15:25:15 EST
This patch adds the documentation file for the zswap functionality.
Signed-off-by: Seth Jennings <sjenning@xxxxxxxxxxxxxxxxxx>
Documentation/vm/zswap.txt | 73 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 73 insertions(+)
create mode 100644 Documentation/vm/zswap.txt
diff --git a/Documentation/vm/zswap.txt b/Documentation/vm/zswap.txt
new file mode 100644
@@ -0,0 +1,73 @@
+Zswap is a lightweight compressed cache for swap pages.  It takes
+pages that are in the process of being swapped out and attempts to
+compress them into a dynamically allocated RAM-based memory pool.
+If this process is successful, the writeback to the swap device is
+deferred and, in many cases, avoided completely.  This results in
+a significant I/O reduction and performance gain for swap-intensive
+workloads.
+
+Zswap provides compressed swap caching that basically trades CPU cycles
+for reduced swap I/O.  This trade-off can result in a significant
+performance improvement as reads from and writes to the compressed
+cache are almost always faster than reading from a swap device,
+which incurs the latency of an asynchronous block I/O read.
+Some potential benefits:
+* Desktop/laptop users with limited RAM capacities can mitigate the
+    performance impact of swapping.
+* Overcommitted guests that share a common I/O resource can
+    dramatically reduce their swap I/O pressure, avoiding heavy-handed
+    I/O throttling by the hypervisor.  This allows more work to get
+    done with less impact to the guest workload and guests sharing the
+    I/O subsystem.
+* Users with SSDs as swap devices can extend the life of the device by
+    drastically reducing life-shortening writes.
+Zswap evicts pages from the compressed cache on an LRU basis to the
+backing swap device when the compressed pool reaches its size limit or
+the pool is unable to obtain additional pages from the buddy allocator.
+This requirement had been identified in prior community discussions.
+
+To enable zswap, the "enabled" attribute must be set to 1 at boot time.
+Zswap receives pages for compression through the Frontswap API and is
+able to evict pages from its own compressed pool on an LRU basis and
+write them back to the backing swap device when the compressed pool is
+full or unable to secure additional pages from the buddy allocator.
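The store-time policy described above can be sketched in userspace C. This is an illustrative simulation only; the function and variable names (`zswap_store_sketch`, `POOL_LIMIT_PAGES`, `evict_lru_to_swap`) are hypothetical, not the actual zswap symbols, and real writeback involves decompressing the entry and submitting swap I/O.

```c
#include <assert.h>
#include <stdbool.h>

/* Hedged sketch: when the compressed pool is at its limit, the
 * least-recently-used entry is written back to the swap device
 * before the new page is admitted.  All names are hypothetical. */

#define POOL_LIMIT_PAGES 4

static unsigned long pool_pages;   /* pages currently held by the pool */
static unsigned long writebacks;   /* LRU evictions to the swap device */

static void evict_lru_to_swap(void)
{
	/* stand-in for: pick LRU entry, decompress, write to swap */
	pool_pages--;
	writebacks++;
}

static bool zswap_store_sketch(void)
{
	if (pool_pages >= POOL_LIMIT_PAGES)
		evict_lru_to_swap();   /* make room on an LRU basis */
	pool_pages++;                  /* admit the compressed page */
	return true;
}
```

With a limit of 4 pages, storing 6 pages admits all of them but forces two LRU writebacks along the way.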
+Zswap makes use of zsmalloc for managing the compressed memory pool.
+This is because zsmalloc is specifically designed to minimize
+fragmentation on large (> PAGE_SIZE/2) allocation sizes.  Each
+allocation in zsmalloc is not directly accessible by address.
+Rather, a handle is returned by the allocation routine and that handle
+must be mapped before being accessed.  The compressed memory pool grows
+on demand and shrinks as compressed pages are freed.  The pool is not
+preallocated.
+When a swap page is passed from frontswap to zswap, zswap maintains
+a mapping of the swap entry, a combination of the swap type and swap
+offset, to the zsmalloc handle that references that compressed swap
+page. This mapping is achieved with a red-black tree per swap type.
+The swap offset is the search key for the tree nodes.
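The per-swap-type mapping described above can be illustrated with a small lookup tree keyed on the swap offset. The kernel uses a red-black tree; this sketch substitutes a plain binary search tree for brevity, and the names (`zswap_entry_sketch`, `tree_insert`, `tree_lookup`) are hypothetical, not the actual zswap symbols.

```c
#include <assert.h>
#include <stdlib.h>

/* Hedged sketch of the per-swap-type tree: each node maps a swap
 * offset (the search key) to the zsmalloc handle that references
 * the compressed page.  A plain BST stands in for the rbtree. */

struct zswap_entry_sketch {
	unsigned long offset;          /* swap offset: search key  */
	unsigned long handle;          /* zsmalloc handle (opaque) */
	struct zswap_entry_sketch *left, *right;
};

static struct zswap_entry_sketch *
tree_insert(struct zswap_entry_sketch *root,
	    unsigned long offset, unsigned long handle)
{
	if (!root) {
		struct zswap_entry_sketch *e = calloc(1, sizeof(*e));
		e->offset = offset;
		e->handle = handle;
		return e;
	}
	if (offset < root->offset)
		root->left = tree_insert(root->left, offset, handle);
	else if (offset > root->offset)
		root->right = tree_insert(root->right, offset, handle);
	else
		root->handle = handle; /* duplicate store: replace */
	return root;
}

static struct zswap_entry_sketch *
tree_lookup(struct zswap_entry_sketch *root, unsigned long offset)
{
	while (root && root->offset != offset)
		root = offset < root->offset ? root->left : root->right;
	return root;
}
```

A load then walks the tree for its swap offset, maps the handle it finds, and decompresses; a miss (NULL) means the page was never stored or was already written back.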
+Zswap seeks to be simple in its policies.  Sysfs attributes allow for
+two user controlled policies:
+* max_compression_ratio - Maximum compression ratio, as a percentage,
+    for an acceptable compressed page.  Any page that does not compress
+    by at least this ratio will be rejected.
+* max_pool_percent - The maximum percentage of memory that the
+    compressed pool can occupy.
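The max_compression_ratio policy can be sketched as a simple admission check, under the assumption that a page is acceptable when its compressed size is no more than max_compression_ratio percent of PAGE_SIZE. The function name and exact rounding here are illustrative, not the kernel code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hedged sketch of the admission check: reject pages that do not
 * compress well enough to be worth caching.  4096 stands in for
 * PAGE_SIZE on a typical configuration. */

#define SKETCH_PAGE_SIZE 4096u

static bool page_acceptable(size_t compressed_len,
			    unsigned int max_compression_ratio)
{
	return compressed_len * 100 <=
	       (size_t)SKETCH_PAGE_SIZE * max_compression_ratio;
}
```

With a ratio of 80, a page compressing to 3000 bytes (about 73% of the page) is admitted, while one compressing only to 3500 bytes (about 85%) is rejected and falls through to the swap device.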
+Zswap allows the compressor to be selected at kernel boot time by
+setting the "compressor" attribute.  The default compressor is lzo.
+A debugfs interface is provided for various statistics about the pool
+size, number of pages stored, and various counters for the reasons
+pages are rejected.