Re: [PATCH 0/2] zcache: a new start for upstream

From: Bob Liu
Date: Wed Jul 24 2013 - 09:52:31 EST


Hi Seth,

On Mon, Jul 22, 2013 at 11:07 PM, Seth Jennings
<sjenning@xxxxxxxxxxxxxxxxxx> wrote:
> Sorry for the dup Bob, last reply only went to linux-mm
>
> On Sat, Jul 20, 2013 at 10:36:56PM +0800, Bob Liu wrote:
>> We already have zswap, which reduces swap-out/in I/O by compressing
>> anon pages. It was merged into v3.11-rc1 together with the zbud
>> allocation layer.
>>
>> However, there is another kind of page (clean file pages) that is
>> suitable for compression as well. Upstream has already merged its
>> frontend (cleancache). What we are lacking is a cleancache backend,
>> in the same way that zswap is a backend for frontswap.
>>
>> Furthermore, we need to balance the number of compressed anon and file
>> pages. E.g. it is unfair to normal file pages if the zswap pool
>> occupies too much memory for the storage of compressed anon pages.
>>
>> Although the current version of zcache in the staging tree already
>> does the work mentioned above, the implementation is too complicated
>> to be merged upstream.
>>
>> What I'm looking for is a new path for zcache towards upstream.
>> The first change is that it no longer lives in the staging tree.
>> The second is to implement a simple cleancache backend first, based
>> on the same zbud allocator that zswap uses.
>
> I like the approach of distilling zcache down to only page cache compression
> as a start.
>

Thank you for your review!

> However, there is still the unresolved issue of the streaming read regression.
> If the workload does streaming reads (i.e. reads from a set much larger than
> RAM and does no rereads), zcache will regress that workload because it will
> be compressing pages that will quickly be tossed out of the second chance
> cache too.
>
> This is a difficult problem when it comes to page cache compression: how
> to know whether a page will be used again. In the case of zswap, the
> page is persistent in memory and therefore MUST be maintained. With
> page cache compression, that isn't that case. There is the option to
> just toss it and reread from disk.

Perhaps we can check whether the file page was on the active list, and
only put reclaimed file pages that came from the active list into
cleancache.

Of course this doesn't fix the problem completely, but I think we can
get a higher hit rate.
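To make the idea concrete, here is a toy simulation of that admission
filter (purely illustrative; `ToyCleancache`, `put`, and `was_active`
are hypothetical names, not the kernel cleancache API). Pages reclaimed
straight off the inactive list, as in a streaming read, are rejected,
so we never waste cycles compressing data that will not be reread:

```python
# Toy model of the proposed filter: only pages that were on the active
# list when reclaimed are admitted to the compressed cache.
# All names here are illustrative; this is not the kernel API.

class ToyCleancache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}          # page id -> "compressed" payload

    def put(self, page_id, data, was_active):
        # Skip pages reclaimed straight off the inactive list
        # (e.g. a streaming read that is never reused).
        if not was_active:
            return False
        if len(self.store) >= self.capacity:
            self.store.pop(next(iter(self.store)))  # evict oldest entry
        self.store[page_id] = data
        return True

    def get(self, page_id):
        # On a miss the caller would simply reread from disk.
        return self.store.get(page_id)

cache = ToyCleancache(capacity=2)

# A streaming page was never promoted to the active list: rejected.
assert cache.put("stream-1", b"...", was_active=False) is False

# A page that saw reuse (active list) is admitted and can be read back.
assert cache.put("hot-1", b"payload", was_active=True) is True
assert cache.get("hot-1") == b"payload"
```

The filter cannot distinguish every access pattern, but it cheaply
excludes the worst case Seth describes.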

>
> The assumption is that keeping as many cached pages as possible, regardless
> of the overhead to do so, is always a win. But this is not always true.
>
>>
>> In the end, I hope we can combine the new cleancache backend with
>> zswap (the frontswap backend), in order to have a generic in-kernel
>> memory compression solution upstream.
>
> I don't see a need to combine them since, afaict, you'd really never use them
> at the same time as zswap (anon memory pressure in general) shreds the page
> cache and would aggressively shrink zcache to the point of uselessness.
>

Makes sense, but is there any way to share the compression functions
and the per-CPU functions between the two backends?
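Roughly what I mean, sketched in userspace (the function names and the
use of zlib are hypothetical stand-ins, not the kernel crypto API): one
shared compression helper that both the frontswap backend and the
cleancache backend call, instead of each carrying its own copy.

```python
# Illustration only: a shared compression helper that both a frontswap
# backend (zswap) and a cleancache backend (zcache) could reuse.
# zlib stands in for whatever compressor the kernel side would pick.

import zlib

def compress_page(data: bytes) -> bytes:
    """Shared helper: compress one page's worth of data."""
    return zlib.compress(data)

def decompress_page(blob: bytes) -> bytes:
    """Shared helper: inverse of compress_page."""
    return zlib.decompress(blob)

# Both backends would reuse the same pair of helpers:
page = b"A" * 4096                   # a page of easily-compressible data
blob = compress_page(page)
assert decompress_page(blob) == page
assert len(blob) < len(page)         # compression actually saved space
```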

>>
>> Bob Liu (2):
>> zcache: staging: %s/ZCACHE/ZCACHE_OLD
>> mm: zcache: core functions added
>>
>> drivers/staging/zcache/Kconfig | 12 +-
>> drivers/staging/zcache/Makefile | 4 +-
>> mm/Kconfig | 18 +
>> mm/Makefile | 1 +
>> mm/zcache.c | 840 +++++++++++++++++++++++++++++++++++++++
>> 5 files changed, 867 insertions(+), 8 deletions(-)
>> create mode 100644 mm/zcache.c
>
> No code?
>
> Seth

--
Regards,
--Bob
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/