Re: [RFC] Simple Slab: A slab allocator with minimal metainformation

From: KAMEZAWA Hiroyuki
Date: Thu Aug 10 2006 - 01:40:40 EST

On Wed, 9 Aug 2006 22:13:27 -0700 (PDT)
Christoph Lameter <clameter@xxxxxxx> wrote:

> On Thu, 10 Aug 2006, KAMEZAWA Hiroyuki wrote:
> > > There is no freelist for slabs. Slabs are immediately returned to the page
> > > allocator. The page allocator has its own per-cpu page queues that should
> > > provide enough caching.
> > >
> >
> > I think that the advantage of the slab allocator is:
> > - the object is already initialized at setup, so you don't have to initialize
> >   it again at allocation.
> > - the object is initialized only once, when the slab is created.
> If you do that then you lose the cache-hot advantage. It is advantageous
> to initialize the object and then immediately use it. If you initialize it
> beforehand, the cacheline will be evicted and then brought back.

Hmm, I don't know of a precise analysis of the performance benefit of the slab
allocator on current (Linux + fast CPU/bus + large cache) systems. I'd be glad
if you could show the performance of the new "Simple Slab" next time.

> > If a slab page is returned to the page allocator ASAP, the number of object
> > initializations may increase.
> The initializers in the existing slab are only used in rare cases in the
> kernel.

Because of inode_init_once, much of the code that uses inodes relies on the
constructor (initialization) path. And inodes are one of the heaviest users of
the slab allocator.

[kamezawa@aworks linux-2.6.18-rc4]$ grep init_once /proc/kallsyms
c0155cdf t init_once
c0163e00 t init_once
c016ef02 t init_once
c0172a01 T inode_init_once
c0172b77 t init_once
c0187704 t init_once
c019a19f t init_once
c01aa643 t init_once
c01abd3f t init_once
c01acdab t init_once
c01b2500 t init_once
c01d3139 t init_once
c01db8b3 t init_once
c02f9a62 t init_once
c0358ab5 t init_once
c03b9f6c r __ksymtab_inode_init_once
c03c28fe r __kstrtab_inode_init_once

A performance measurement related to inodes would probably show useful data.
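
For reference, this is roughly how the inode cache wires init_once in as a slab
constructor. It is only a minimal sketch from memory of the 2.6.18-era API (the
real code is in fs/inode.c); the exact ctor signature, the SLAB_CTOR_* flag
check, the cache flags, and the setup function name here are illustrative, not
verbatim:

/* Sketch only: the real code lives in fs/inode.c; details may differ. */
static struct kmem_cache *inode_cachep;

static void init_once(void *foo, struct kmem_cache *cachep, unsigned long flags)
{
	struct inode *inode = foo;

	/*
	 * The constructor runs when a slab page is populated, not on every
	 * kmem_cache_alloc(), so a freed-and-reallocated inode keeps its
	 * constructed state and skips this work.
	 */
	if ((flags & (SLAB_CTOR_VERIFY | SLAB_CTOR_CONSTRUCTOR)) ==
	    SLAB_CTOR_CONSTRUCTOR)
		inode_init_once(inode);
}

/* Placeholder name; the real setup is done in inode_init(). */
static void __init inode_cache_setup(void)
{
	/* init_once is registered as the cache's constructor argument. */
	inode_cachep = kmem_cache_create("inode_cache", sizeof(struct inode),
					 0, SLAB_RECLAIM_ACCOUNT | SLAB_PANIC,
					 init_once, NULL);
}

If Simple Slab hands pages straight back to the page allocator, every page that
comes back later has to run this constructor again for each object on it, which
is the extra initialization cost I'm worried about.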

- Kame
