Re: [PATCH 1/3] Slab infrastructure for array operations
From: Christoph Lameter
Date: Fri Feb 13 2015 - 10:48:10 EST
On Fri, 13 Feb 2015, Joonsoo Kim wrote:
>
> I also think that this implementation is slub-specific. For example,
> in the slab case it is always better to access the local cpu cache first
> rather than the page allocator, since slab doesn't use a list to manage
> free objects and there is no cache line overhead as in slub. I think
> that, in kmem_cache_alloc_array(), just calling the allocator-defined
> __kmem_cache_alloc_array() is the better approach.
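
[For concreteness, a minimal sketch of the delegation Joonsoo describes is
below. The signature (cache, gfp flags, number of objects, destination
array) and the __kmem_cache_alloc_array() split are assumptions taken from
this thread, not the actual patch.]

#include <linux/slab.h>

/*
 * Sketch only: a generic entry point that hands the whole array request
 * to the allocator-specific implementation, so each allocator can choose
 * its own fill order. Returns the number of objects placed in p[].
 */
size_t kmem_cache_alloc_array(struct kmem_cache *s, gfp_t gfp,
			      size_t nr, void **p)
{
	/* slab/slub/slob would each provide __kmem_cache_alloc_array() */
	return __kmem_cache_alloc_array(s, gfp, nr, p);
}
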
What do you mean by "better"? Please be specific as to where you would see
a difference. And slab definitely manages free objects, although
differently than slub. SLAB manages per cpu (local) objects, per node
partial lists etc. Same as SLUB. The cache line overhead is there, but it
is not that big a difference when choosing which objects to get first.
For a large allocation it is beneficial for both allocators to first reduce
the list of partially allocated slab pages on a node.
Going to the local objects first is enticing since these are cache hot, but
there are only a limited number of them available and there are issues
with acquiring a large number of objects. For SLAB the objects are dispersed
and not spatially local. For SLUB the number of objects is usually much
more limited than for SLAB (but that is configurable these days via the cpu
partial pages). SLUB allocates spatially local objects from one page
before moving to the next. This is an advantage. However, it has to
traverse a linked list instead of an array (as SLAB does).
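
[To make the order argued for here concrete, a rough sketch follows. The
helpers fill_from_partial_pages(), fill_from_cpu_cache() and
fill_from_new_pages() are hypothetical stand-ins for whatever each
allocator provides, not functions from the patch series.]

#include <linux/slab.h>

/*
 * Sketch of the fill order argued for above; all three helpers are
 * hypothetical placeholders. Returns the number of objects placed in p[].
 */
static size_t fill_array_sketch(struct kmem_cache *s, gfp_t gfp,
				size_t nr, void **p)
{
	size_t filled = 0;

	/* 1. Drain partially allocated slab pages on the local node first:
	 *    spatially local objects, and it shrinks the partial lists. */
	filled += fill_from_partial_pages(s, gfp, nr - filled, p + filled);

	/* 2. Then take the limited number of cache hot per cpu objects. */
	if (filled < nr)
		filled += fill_from_cpu_cache(s, nr - filled, p + filled);

	/* 3. Finally fall back to fresh pages from the page allocator. */
	if (filled < nr)
		filled += fill_from_new_pages(s, gfp, nr - filled, p + filled);

	return filled;
}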