Re: [PATCH 1/2] slub: fix leakage

From: Christoph Lameter
Date: Sat Nov 03 2007 - 14:51:43 EST


On Sat, 3 Nov 2007, Hugh Dickins wrote:

> Which fixes the leakage: Objects and Partials then remain stable.

Well, this code is just an optimization for a rare case. Your patch may not
handle the debug situation the right way. We could just remove the code.



SLUB: Fix memory leak by not reusing cpu_slab

Fix the memory leak that may occur when we attempt to reuse a cpu_slab
that was allocated while we had reenabled interrupts in order to be able
to grow a slab cache. The per-cpu freelist may still contain objects, and
in that situation we may overwrite the per-cpu freelist pointer, losing
those objects. This only occurs if we find that the concurrently
allocated slab fits our allocation needs.
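
For illustration only, here is a minimal user-space sketch of the problem.
The names (struct object, cpu_freelist, count_free()) are invented for this
toy model and are not SLUB's data structures; it only shows that overwriting
a freelist head which still points at free objects makes those objects
unreachable:

/*
 * Toy model, not kernel code: overwriting a per-cpu freelist head that
 * still points at free objects loses track of those objects.
 */
#include <stdio.h>
#include <stddef.h>

struct object {
	struct object *next;	/* freelist link stored in the free object */
};

/* Count how many free objects are still reachable from a freelist head. */
static int count_free(struct object *head)
{
	int n = 0;

	while (head) {
		n++;
		head = head->next;
	}
	return n;
}

int main(void)
{
	struct object old_objs[3], new_objs[4];
	struct object *cpu_freelist;
	int i;

	/* Chain up the "old" per-cpu freelist: 3 free objects. */
	for (i = 0; i < 3; i++)
		old_objs[i].next = (i < 2) ? &old_objs[i + 1] : NULL;
	cpu_freelist = &old_objs[0];

	/* Chain up the freelist of a freshly allocated slab: 4 objects. */
	for (i = 0; i < 4; i++)
		new_objs[i].next = (i < 3) ? &new_objs[i + 1] : NULL;

	printf("free before: %d\n", count_free(cpu_freelist));

	/*
	 * The bug in miniature: install the new slab's freelist without
	 * first returning the old chain anywhere. The 3 old objects are
	 * still marked free but are no longer reachable through any
	 * freelist, so they can never be handed out again.
	 */
	cpu_freelist = &new_objs[0];

	printf("free after:  %d\n", count_free(cpu_freelist));
	return 0;
}

Compiled with gcc, this prints 3 reachable free objects before the switch
and only 4 (not 7) afterwards; the three old objects are the leak.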

If we simply always deactivate the current cpu slab, then the freelist will
be properly reintegrated and the memory leak will go away.
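
Again for illustration only, a toy sketch of what deactivating first buys
us: the remaining per-cpu free objects are pushed back onto their page's
freelist before the new slab is installed, so nothing is lost. struct
toy_page and toy_deactivate() are invented for this sketch and merely model
what flush_slab()/deactivate_slab() achieve in mm/slub.c:

/*
 * Toy model, not kernel code: return the per-cpu free objects to their
 * page before switching the cpu freelist to a new slab.
 */
#include <stdio.h>
#include <stddef.h>

struct object {
	struct object *next;
};

struct toy_page {
	struct object *freelist;	/* free objects owned by the page */
};

/* Push every object on the cpu freelist back onto its page. */
static void toy_deactivate(struct toy_page *page, struct object **cpu_freelist)
{
	while (*cpu_freelist) {
		struct object *obj = *cpu_freelist;

		*cpu_freelist = obj->next;
		obj->next = page->freelist;
		page->freelist = obj;
	}
}

static int count_free(struct object *head)
{
	int n = 0;

	for (; head; head = head->next)
		n++;
	return n;
}

int main(void)
{
	struct object old_objs[3], new_objs[4];
	struct toy_page old_page = { .freelist = NULL };
	struct object *cpu_freelist;
	int i;

	for (i = 0; i < 3; i++)
		old_objs[i].next = (i < 2) ? &old_objs[i + 1] : NULL;
	cpu_freelist = &old_objs[0];

	for (i = 0; i < 4; i++)
		new_objs[i].next = (i < 3) ? &new_objs[i + 1] : NULL;

	/* Deactivate first: the 3 old objects end up back on their page. */
	toy_deactivate(&old_page, &cpu_freelist);
	cpu_freelist = &new_objs[0];

	printf("cpu freelist: %d, back on old page: %d\n",
	       count_free(cpu_freelist), count_free(old_page.freelist));
	return 0;
}

Here the old page ends up with its 3 objects back on its own freelist while
the cpu freelist holds the new slab's 4 objects, so every object remains
allocatable.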

Signed-off-by: Christoph Lameter <clameter@xxxxxxx>

---
mm/slub.c | 20 +-------------------
1 file changed, 1 insertion(+), 19 deletions(-)

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2007-11-03 11:41:58.000000000 -0700
+++ linux-2.6/mm/slub.c 2007-11-03 11:42:12.000000000 -0700
@@ -1511,26 +1511,8 @@ new_slab:
 
 	if (new) {
 		c = get_cpu_slab(s, smp_processor_id());
-		if (c->page) {
-			/*
-			 * Someone else populated the cpu_slab while we
-			 * enabled interrupts, or we have gotten scheduled
-			 * on another cpu. The page may not be on the
-			 * requested node even if __GFP_THISNODE was
-			 * specified. So we need to recheck.
-			 */
-			if (node_match(c, node)) {
-				/*
-				 * Current cpuslab is acceptable and we
-				 * want the current one since its cache hot
-				 */
-				discard_slab(s, new);
-				slab_lock(c->page);
-				goto load_freelist;
-			}
-			/* New slab does not fit our expectations */
+		if (c->page)
 			flush_slab(s, c);
-		}
 		slab_lock(new);
 		SetSlabFrozen(new);
 		c->page = new;