RE: [PATCH v2 0/3] staging: zcache: xcfmalloc support

From: Dan Magenheimer
Date: Thu Sep 15 2011 - 13:30:35 EST


> From: Seth Jennings [mailto:sjenning@xxxxxxxxxxxxxxxxxx]
> Subject: Re: [PATCH v2 0/3] staging: zcache: xcfmalloc support
>
> Hey Nitin,
>
> So this is how I see things...
>
> Right now xvmalloc is broken for zcache's application because
> of its huge fragmentation for half the valid allocation sizes
> (> PAGE_SIZE/2).

Um, I have to disagree here. It is broken for zcache only for
SOME workloads/data, where the AVERAGE compression is poor
(compressed size > PAGE_SIZE/2).
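
To put an illustrative number on that (my own back-of-the-envelope
arithmetic, not a measurement of either allocator): if a page
compresses to, say, 2500 bytes and ends up alone in a 4096-byte
page, the stranded remainder looks like this:

#include <stdio.h>

#define PAGE_SIZE 4096

int main(void)
{
	unsigned int clen = 2500;		/* hypothetical compressed size */
	unsigned int waste = PAGE_SIZE - clen;	/* remainder stranded in the page */

	printf("object %u bytes, waste %u bytes (%.0f%% of the page)\n",
	       clen, waste, 100.0 * waste / PAGE_SIZE);
	return 0;
}

That ~39% waste only materializes when objects in that size range
dominate the mix, which is exactly the workload-dependence I mean.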

> My xcfmalloc patches are _a_ solution that is ready now. Sure,
> it doesn't do compaction yet, and it has some metadata overhead.
> So it's not "ideal" (if there is such a thing). But it does fix
> the brokenness of xvmalloc for zcache's application.

But at what cost? As Dave Hansen pointed out, we still do
not have a comprehensive worst-case performance analysis for
xcfmalloc. Without that (and without an analysis over a very
large set of workloads), it is difficult to characterize
one as "better" than the other.

> So I see two ways going forward:
>
> 1) We review and integrate xcfmalloc now. Then, when you are
> done with your allocator, we can run them side by side and see
> which is better by numbers. If yours is better, you'll get no
> argument from me and we can replace xcfmalloc with yours.
>
> 2) We can agree on a date (sooner rather than later) by which your
> allocator will be completed. At that time we can compare them and
> integrate the best one by the numbers.
>
> Which would you like to do?

Seth, I am still not clear why it is not possible to support
either allocation algorithm, selectable at runtime. Or even
dynamically... use xvmalloc to store well-compressible pages
and xcfmalloc for poorly-compressible pages. I understand
it might require some additional coding, perhaps even an
ugly hack or two, but it seems possible.
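
Very roughly, and strictly as a sketch of the dynamic idea:
xv_alloc()/xcf_alloc() below are toy malloc stand-ins, NOT the real
pool-based xvmalloc/xcfmalloc entry points (which deal in page/offset
pairs), and the PAGE_SIZE/2 threshold is just a strawman:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

enum zb_backend { ZB_XV, ZB_XCF };

struct zb_handle {
	enum zb_backend which;	/* which allocator owns this object */
	void *obj;
};

/* toy stand-ins for the two allocators -- not the real APIs */
static void *xv_alloc(size_t len)  { return malloc(len); }
static void *xcf_alloc(size_t len) { return malloc(len); }

static int zb_store(struct zb_handle *h, const void *data, size_t clen)
{
	/* well-compressed objects go to xvmalloc, the rest to xcfmalloc */
	if (clen <= PAGE_SIZE / 2) {
		h->which = ZB_XV;
		h->obj = xv_alloc(clen);
	} else {
		h->which = ZB_XCF;
		h->obj = xcf_alloc(clen);
	}
	if (!h->obj)
		return -1;
	memcpy(h->obj, data, clen);
	return 0;
}

int main(void)
{
	struct zb_handle h;
	char buf[3000] = "pretend this is 3000 bytes of compressed data";

	if (zb_store(&h, buf, sizeof(buf)) == 0)
		printf("stored via %s\n",
		       h.which == ZB_XV ? "xvmalloc" : "xcfmalloc");
	free(h.obj);
	return 0;
}

The point is only that the choice can be made per-object at store
time, from the compressed length zcache already has in hand; the
real work is in the teardown and accounting paths, which is where I
would expect the ugly hack or two.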

Dan