Re: [PATCH] slub: try to get cpu partial slab even if we get enough objects for cpu freelist
From: JoonSoo Kim
Date: Fri Aug 17 2012 - 09:34:20 EST
2012/8/17 Christoph Lameter <cl@xxxxxxxxx>:
> On Thu, 16 Aug 2012, JoonSoo Kim wrote:
>
>> But if you prefer that s->cpu_partial covers both the cpu slab and
>> the cpu partial slabs, then get_partial_node() needs another minor fix.
>> We should also count the objects in the cpu slab when we refill the
>> cpu partial list. Since "available" must then keep the cpu slab's
>> count across loop iterations, both counters are declared at the top
>> of the function.
>> The following is my suggestion.
>>
>> @@ -1538,6 +1538,7 @@ static void *get_partial_node(struct kmem_cache *s,
>>  {
>>  	struct page *page, *page2;
>>  	void *object = NULL;
>> +	int available = 0, nr = 0;
>>
>> @@ -1546,7 +1546,6 @@ static void *get_partial_node(struct kmem_cache *s,
>>  	spin_lock(&n->list_lock);
>>  	list_for_each_entry_safe(page, page2, &n->partial, lru) {
>>  		void *t = acquire_slab(s, n, page, object == NULL);
>> -		int available;
>>
>>  		if (!t)
>>  			break;
>> @@ -1557,10 +1557,10 @@ static void *get_partial_node(struct kmem_cache *s,
>>  			object = t;
>>  			available = page->objects - page->inuse;
>>  		} else {
>> -			available = put_cpu_partial(s, page, 0);
>> +			nr = put_cpu_partial(s, page, 0);
>>  			stat(s, CPU_PARTIAL_NODE);
>>  		}
>> -		if (kmem_cache_debug(s) || available > s->cpu_partial / 2)
>> +		if (kmem_cache_debug(s) || (available + nr) > s->cpu_partial / 2)
>>  			break;
>>
>>  	}
>>
>> If you agree with this suggestion, I will send a proper patch for it.
>
> What difference does this patch make? At the end of the day you need the
> total number of objects available in the partial slabs and the cpu slab
> for comparison.
It doesn't make any large behavioral difference, but it makes the code
robust and consistent, and consistent code makes it easier to see what
the code does.
It is somewhat odd that the first pass through the loop counts the
objects kept in the cpu slab, while later passes drop that number and
count only the objects in the cpu partial slabs.
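To make the difference concrete, here is a minimal userspace sketch of
the accounting (the loop, the numbers, and the names CPU_PARTIAL and
free_objs are illustrative stand-ins of mine, not the kernel code):

#include <stdio.h>

#define CPU_PARTIAL 30			/* stand-in for s->cpu_partial */

int main(void)
{
	int free_objs[] = { 6, 10, 10 };	/* free objects per acquired slab */
	int available = 0;	/* objects in the cpu slab (first pass) */
	int nr = 0;		/* objects in the cpu partial slabs */
	int i;

	for (i = 0; i < 3; i++) {
		if (i == 0)
			available = free_objs[i];	/* first slab becomes the cpu slab */
		else
			nr += free_objs[i];	/* later slabs refill the cpu partial list */

		/*
		 * Patched check: count both pools.  The unpatched check
		 * uses only nr on the later passes.
		 */
		if (available + nr > CPU_PARTIAL / 2) {
			printf("stop after %d slab(s), %d objects cached\n",
			       i + 1, available + nr);
			return 0;
		}
	}
	printf("partial list exhausted, %d objects cached\n",
	       available + nr);
	return 0;
}

With these numbers the patched check stops after two slabs
(6 + 10 = 16 > 30 / 2), while the unpatched check ignores the cpu
slab's 6 objects and takes a third slab before stopping (20 > 15),
caching 26 objects in total.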
Thanks!