Re: [PATCH] cleaned up slab allocator

From: Andi Kleen (ak@suse.de)
Date: Sun Jun 04 2000 - 12:46:43 EST


Manfred Spraul <manfreds@colorfullife.com> writes:

> I cleaned up the current slab allocator:
>
> * the object code is 25% shorter, the source file is 220 lines shorter.
> * better efficiency for cacheline sized objects.
> * fewer special cases, thus fewer branches: kmem_cache_alloc is down to
> 2 conditional branches and one unconditional jump.
> * simpler solution for DMA pages: I create 2 general caches for each
> size, one of them for GFP_DMA pages, another one for normal pages.
> * GFP_HIGHMEM slabs are possible, but not yet implemented: add a new
> flag to kmem_cache_create, and force the slab descriptor off-slab.
>
> The patch is long, I uploaded it to
> http://colorfullife.com/~manfreds/slab
>
> The patch doesn't crash immediately, but it's WIP: I'll try to add
> per-cpu support on top of this patch.

Very cool. That was long overdue.
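
For illustration, the two-general-caches-per-size scheme for GFP_DMA
described in the quoted mail could look roughly like the sketch below.
This is only a minimal sketch; NR_SIZES, size_to_index() and the array
names are made up here, not taken from the actual patch.

#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical: one array of general caches for normal pages and a
 * second one for GFP_DMA pages, both indexed by object size. */
static kmem_cache_t *normal_sizes[NR_SIZES];
static kmem_cache_t *dma_sizes[NR_SIZES];

void *kmalloc(size_t size, int flags)
{
        int i = size_to_index(size);    /* hypothetical size -> slot mapping */

        if (flags & GFP_DMA)
                return kmem_cache_alloc(dma_sizes[i], flags);
        return kmem_cache_alloc(normal_sizes[i], flags);
}

The point being that the GFP_DMA special case reduces to a single branch
in the allocation path.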

Here are some additional ideas for slab improvements that I have wanted to
implement for ages, but never got around to:

- Use a better set of default sizes for the constant-sized slabs. Bonwick
mentions in the original paper that power-of-two slab sizes are suboptimal.
It would be best to have a simple profiler so that everybody could generate
the optimal set for their particular workload.
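
One way to get at such a set would be a simple histogram of requested
kmalloc sizes, along these lines (kmalloc_hist, kmalloc_profile and
KMALLOC_HIST_MAX are made-up names, and the dump-to-/proc part is left
out):

/* Hypothetical kmalloc size profiler: count how often each request
 * size occurs so the default cache sizes can be tuned per workload. */
#define KMALLOC_HIST_MAX 4096

static unsigned long kmalloc_hist[KMALLOC_HIST_MAX + 1];

static inline void kmalloc_profile(size_t size)
{
        if (size > KMALLOC_HIST_MAX)
                size = KMALLOC_HIST_MAX;        /* lump big requests together */
        kmalloc_hist[size]++;
}

Called at the top of kmalloc() and dumped through /proc, this would let
everybody derive a size table for their own workload.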

- Fix the reaction to memory pressure. The current measure (deleting
all caches on the slightest bit of memory pressure) is too radical. It would
be better to have high/low watermarks and to only delete the least recently
used caches first (the trick is to find a good solution that doesn't
impact the fast path).
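
A rough sketch of what such a watermark scheme might look like; the
watermark variables, REAP_AGE, the last_alloc timestamp and the
cache_chain walk are all hypothetical, and the hard part (keeping the
bookkeeping off the fast path) is not addressed here:

void kmem_cache_reap(int gfp_mask)
{
        kmem_cache_t *cachep;

        if (nr_slab_free_pages > slab_pages_low)
                return;                         /* no real memory pressure yet */

        for (cachep = cache_chain; cachep; cachep = cachep->next) {
                if (jiffies - cachep->last_alloc < REAP_AGE)
                        continue;               /* used recently, keep its slabs */
                kmem_cache_shrink(cachep);      /* free the cache's unused slabs */
                if (nr_slab_free_pages > slab_pages_high)
                        break;                  /* reclaimed enough, stop early */
        }
}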

- Fix the remaining high-volume kmalloc users to use a dedicated cache. There
still seems to be some subsystem that does a lot of <= 32 byte kmallocs.
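
Converting such a user is mostly mechanical; a sketch of the usual
pattern (struct foo and the names here are placeholders):

static kmem_cache_t *foo_cachep;

void foo_init(void)
{
        /* one dedicated cache instead of many small kmallocs */
        foo_cachep = kmem_cache_create("foo", sizeof(struct foo),
                                       0, 0, NULL, NULL);
}

struct foo *foo_alloc(void)
{
        return kmem_cache_alloc(foo_cachep, GFP_KERNEL);        /* was kmalloc() */
}

void foo_free(struct foo *f)
{
        kmem_cache_free(foo_cachep, f);                         /* was kfree() */
}

This avoids the rounding-up waste of the general caches and gives
per-cache statistics for free.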

-Andi



