Re: [PATCH] slub: Hold list_lock unconditionally before the call to add_full.

From: David Rientjes
Date: Fri Feb 07 2014 - 15:46:31 EST


On Sat, 8 Feb 2014, Gautham R Shenoy wrote:

> Hi,
>
> Both the lockdep annotation and the comment that preceded it indicate
> that mm/slub.c:add_full(s, n, page) expects to be called with
> n->list_lock held.
>
> However, there's a call path in deactivate_slab() where
>
> 	(new.inuse || n->nr_partial <= s->min_partial) &&
> 	!new.freelist &&
> 	!kmem_cache_debug(s)
>
> holds, which ends up calling add_full() without holding
> n->list_lock.
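>
> For reference, the shape of that path is roughly the following
> (a simplified sketch of deactivate_slab(), not the verbatim
> 3.14-rc1 source):
>
> 	} else {
> 		m = M_FULL;
> 		/* list_lock is only taken in the debug case */
> 		if (kmem_cache_debug(s) && !lock) {
> 			lock = 1;
> 			spin_lock(&n->list_lock);
> 		}
> 	}
> 	...
> 	} else if (m == M_FULL) {
> 		/* reached without list_lock when !kmem_cache_debug(s) */
> 		add_full(s, n, page);
> 	}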
>
> This was discovered while onlining/offlining CPUs in 3.14-rc1, due to
> the lockdep annotations added by commit
> c65c1877bd6826ce0d9713d76e30a7bed8e49f38.
>
> Fix this by taking n->list_lock unconditionally in this path,
> irrespective of the state of kmem_cache_debug(s).
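>
> In diff form the change amounts to something like the following
> (illustrative; the actual hunk context may differ):
>
> 	-	if (kmem_cache_debug(s) && !lock) {
> 	+	if (!lock) {
> 			lock = 1;
> 			spin_lock(&n->list_lock);
> 		}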
>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Pekka Enberg <penberg@xxxxxxxxxx>
> Signed-off-by: Gautham R. Shenoy <ego@xxxxxxxxxxxxxxxxxx>

No, the lock isn't needed there unless kmem_cache_debug(s) is actually
set, specifically only when s->flags & SLAB_STORE_USER is set.

You want the patch at http://marc.info/?l=linux-kernel&m=139147105027693
instead, which is already in -mm and linux-next.
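
For context, the approach in that patch is to have add_full() itself
encode when the lock is required, along the lines of (a sketch, not
necessarily the exact hunk):

	static void add_full(struct kmem_cache *s,
			struct kmem_cache_node *n, struct page *page)
	{
		/* n->full is only maintained when SLAB_STORE_USER is set */
		if (!(s->flags & SLAB_STORE_USER))
			return;

		lockdep_assert_held(&n->list_lock);
		list_add(&page->lru, &n->full);
	}

That keeps the lockdep assertion, and the list_lock requirement, scoped
to the configurations where n->full is actually touched.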