Re: [UnifiedV4 00/16] The Unified slab allocator (V4)

From: Christoph Lameter
Date: Mon Oct 18 2010 - 14:00:55 EST


On Wed, 13 Oct 2010, Alex Shi wrote:

> I got the code from
> git://git.kernel.org/pub/scm/linux/kernel/git/christoph/slab.git unified
> on branch "origin/unified" and made a patch based on the 2.6.36-rc7
> kernel. Then I tested the patch on our 2P/4P Core2 machines and on 2P
> NHM and 2P WSM machines. Most of the benchmarks show no clear
> improvement or regression. The tested benchmarks are listed here:
> http://kernel-perf.sourceforge.net/about_tests.php

Ah. Thanks. The tests need to show a clear benefit for this to be a
viable solution. They did show one earlier, without all the NUMA queuing
on SMP.

> BTW, I saw several kernel panics during fio testing:
> ===================
> Pid: 776, comm: kswapd0 Not tainted 2.6.36-rc7-unified #1 X8DTN/X8DTN
> RIP: 0010:[<ffffffff810cc21c>] [<ffffffff810cc21c>] slab_alloc+0x562/0x6f2

I cannot see the actual error message in that paste; I guess this is the
result of a BUG_ON()? I'll try to run that fio test first.

> kswapd0: page allocation failure. order:0, mode:0xd0
> Pid: 714, comm: kswapd0 Not tainted 2.6.36-rc7-unified #1
> Call Trace:
> [<ffffffff8109fcf4>] ? __alloc_pages_nodemask+0x63f/0x6c7
> [<ffffffff8100328e>] ? apic_timer_interrupt+0xe/0x20
> [<ffffffff810cc6f7>] ? new_slab+0xac/0x277
> [<ffffffff810cce1e>] ? slab_alloc+0x55c/0x6e8
> [<ffffffff810ce58b>] ? shared_caches+0x31/0xd9
> [<ffffffff810ce110>] ? __kmalloc+0xb0/0xff
> [<ffffffff810ce58b>] ? shared_caches+0x31/0xd9
> [<ffffffff810ce649>] ? expire_alien_caches+0x16/0x8d
> [<ffffffff810cde25>] ? kmem_cache_expire_all+0xf6/0x14d

Expiration needs to get the gfp flags from the reclaim context. And we
now have more allocations in a reclaim context.
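
(For reference, mode:0xd0 in the failure above decodes to
__GFP_WAIT|__GFP_IO|__GFP_FS, i.e. GFP_KERNEL, on 2.6.36.) A minimal
sketch of that idea, passing the reclaim caller's gfp mask down into the
expiration path; only kmem_cache_expire_all() and expire_alien_caches()
are names taken from the backtrace, the signatures and bodies below are
assumptions for illustration:

#include <linux/gfp.h>
#include <linux/slab.h>

/*
 * Hypothetical sketch: let the expiration path inherit the gfp
 * constraints of the reclaim context instead of assuming GFP_KERNEL.
 */
static int expire_alien_caches(struct kmem_cache *s, gfp_t gfp)
{
	void *scratch;

	/*
	 * Any allocation made on behalf of reclaim uses the caller's
	 * gfp mask, so we neither recurse into direct reclaim nor
	 * trip the page allocation failure warning from kswapd.
	 */
	scratch = kmalloc(PAGE_SIZE, gfp);
	if (!scratch)
		return -ENOMEM;		/* skip this expiration round */

	/* ... drain the alien/shared queues using the scratch area ... */

	kfree(scratch);
	return 0;
}

/* Called from kswapd/reclaim with the gfp mask of that context. */
int kmem_cache_expire_all(gfp_t reclaim_gfp)
{
	/* iterate over all caches, forwarding the caller's constraints */
	/* for each cache s: expire_alien_caches(s, reclaim_gfp); */
	return 0;
}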

> slab_unreclaimable:2963060kB kernel_stack:1016kB pagetables:656kB

3GB unreclaimable.... Memory leak.
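
One way to confirm that it is a steady leak rather than a transient
spike would be to sample SUnreclaim from /proc/meminfo while fio runs.
A small userspace sketch (not part of the patch set, just a monitoring
aid):

#include <stdio.h>
#include <unistd.h>

/* Return the current SUnreclaim value in kB, or -1 on error. */
static long sunreclaim_kb(void)
{
	char line[256];
	long kb = -1;
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "SUnreclaim: %ld kB", &kb) == 1)
			break;
	fclose(f);
	return kb;
}

int main(void)
{
	/* Print one sample every 10 seconds; stop with Ctrl-C. */
	for (;;) {
		printf("SUnreclaim: %ld kB\n", sunreclaim_kb());
		fflush(stdout);
		sleep(10);
	}
	return 0;
}

If the value only climbs and never comes back down over the run, that
points at slab memory never being freed back.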
