[patch 06/13] SLUB: Fix memory leak by not reusing cpu_slab

From: Greg KH
Date: Thu Nov 15 2007 - 01:12:39 EST


-stable review patch. If anyone has any objections, please let us know.

------------------
From: Christoph Lameter <clameter@xxxxxxx>

patch 05aa345034de6ae9c77fb93f6a796013641d57d5 in mainline.

SLUB: Fix memory leak by not reusing cpu_slab

Fix the memory leak that may occur when we attempt to reuse a cpu_slab
that was allocated while we re-enabled interrupts in order to be able to
grow a slab cache. The per-cpu freelist may still contain objects, and in
that case we may overwrite the per-cpu freelist pointer, losing those
objects. This only occurs if we find that the concurrently allocated slab
fits our allocation needs.

If we simply always deactivate the slab, then the freelist will be properly
reintegrated and the memory leak will go away.
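
For anyone unfamiliar with the two-freelist scheme the leak depends on, here
is a minimal userspace sketch of the failure mode. This is *not* the kernel
code: toy_page, toy_deactivate and the other names are made up for
illustration, and the real logic lives in __slab_alloc()/flush_slab() in
mm/slub.c. The point it shows is that reloading the per-cpu freelist pointer
while it still holds objects makes those objects unreachable, whereas merging
them back first (what deactivating the slab does) keeps them.

/* toy_slub.c -- illustrative only; build with: gcc -Wall -o toy_slub toy_slub.c */
#include <stdio.h>
#include <stdlib.h>

/*
 * Each free object stores the pointer to the next free object in its
 * first word, which is roughly how SLUB chains free objects.
 */
struct toy_page {
	void *freelist;			/* shared freelist (slab_lock protected) */
	void *lockless_freelist;	/* per-cpu fast-path freelist */
};

static void push(void **list, void *object)
{
	*(void **)object = *list;	/* link through the object's first word */
	*list = object;
}

static int count(void *list)
{
	int n = 0;

	while (list) {
		n++;
		list = *(void **)list;
	}
	return n;
}

/* Merge the per-cpu freelist back into the shared one (what deactivating does). */
static void toy_deactivate(struct toy_page *page)
{
	while (page->lockless_freelist) {
		void *object = page->lockless_freelist;

		page->lockless_freelist = *(void **)object;
		push(&page->freelist, object);
	}
}

int main(void)
{
	struct toy_page page = { NULL, NULL };
	struct toy_page buggy, fixed;
	int i;

	/* Two objects were freed on this cpu -> per-cpu freelist. */
	for (i = 0; i < 2; i++)
		push(&page.lockless_freelist, malloc(64));
	/* Two more were freed elsewhere -> shared freelist. */
	for (i = 0; i < 2; i++)
		push(&page.freelist, malloc(64));

	/*
	 * Buggy reuse: reload the per-cpu freelist pointer from
	 * page->freelist while it still holds two objects; those two
	 * objects become unreachable.
	 */
	buggy = page;
	buggy.lockless_freelist = buggy.freelist;
	buggy.freelist = NULL;
	printf("reuse without flush: %d of 4 objects reachable\n",
	       count(buggy.lockless_freelist) + count(buggy.freelist));

	/* Fixed path: deactivate (merge back) first, then reload. */
	fixed = page;
	toy_deactivate(&fixed);
	fixed.lockless_freelist = fixed.freelist;
	fixed.freelist = NULL;
	printf("flush, then reload:  %d of 4 objects reachable\n",
	       count(fixed.lockless_freelist) + count(fixed.freelist));

	return 0;
}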

Signed-off-by: Christoph Lameter <clameter@xxxxxxx>
Cc: Hugh Dickins <hugh@xxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxx>

---
mm/slub.c | 22 +---------------------
1 file changed, 1 insertion(+), 21 deletions(-)

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1501,28 +1501,8 @@ new_slab:
 	page = new_slab(s, gfpflags, node);
 	if (page) {
 		cpu = smp_processor_id();
-		if (s->cpu_slab[cpu]) {
-			/*
-			 * Someone else populated the cpu_slab while we
-			 * enabled interrupts, or we have gotten scheduled
-			 * on another cpu. The page may not be on the
-			 * requested node even if __GFP_THISNODE was
-			 * specified. So we need to recheck.
-			 */
-			if (node == -1 ||
-				page_to_nid(s->cpu_slab[cpu]) == node) {
-				/*
-				 * Current cpuslab is acceptable and we
-				 * want the current one since its cache hot
-				 */
-				discard_slab(s, page);
-				page = s->cpu_slab[cpu];
-				slab_lock(page);
-				goto load_freelist;
-			}
-			/* New slab does not fit our expectations */
+		if (s->cpu_slab[cpu])
 			flush_slab(s, s->cpu_slab[cpu], cpu);
-		}
 		slab_lock(page);
 		SetSlabFrozen(page);
 		s->cpu_slab[cpu] = page;

--