Re: [GIT PULL] scheduler fixes

From: Pekka Enberg
Date: Sun May 24 2009 - 15:14:15 EST

Hi Linus,

On Sun, 24 May 2009, Pekka J Enberg wrote:
>> Ingo, here's a patch that boots a UMA+SMP+SLUB x86-64 kernel on qemu
>> all the way to userspace. It probably breaks a bunch of things for
>> now, but it's something for you to play with if you want.

On Sun, May 24, 2009 at 9:18 PM, Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> In fact, it would be nice to perhaps try to move it even earlier. Now you
> moved it to before the scheduler init (good!), but I do wonder if it could
> be moved up to even before the setup_per_cpu_areas() etc crud.

Oh, sure, we can look into that. I just wanted to take the
conservative approach because I worry about breaking a bunch of
configurations I cannot test. I suspect it's going to get pretty hairy
if we do kmem_cache_init() even earlier. Furthermore, SLUB does its
sysfs setup in kmem_cache_init(), so we'd probably need to split slab
initialization into two stages; a rough sketch of the split is below.
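Something like this, inside mm/slub.c (a minimal sketch, not a patch:
create_boot_caches() is a made-up placeholder for the existing
bootstrap code, while slab_state, slab_caches, slub_lock, and
sysfs_slab_add() are symbols SLUB already has):

	void __init kmem_cache_init(void)
	{
		/*
		 * Stage one: bootstrap the kmem_cache cache and the
		 * kmalloc caches.  Needs only the page allocator; no
		 * sysfs, no interrupts.
		 */
		create_boot_caches();
		slab_state = UP;
	}

	/* Stage two: runs from an initcall once sysfs is available. */
	static int __init kmem_cache_init_late(void)
	{
		struct kmem_cache *s;

		down_write(&slub_lock);
		list_for_each_entry(s, &slab_caches, list)
			sysfs_slab_add(s);
		up_write(&slub_lock);

		slab_state = SYSFS;
		return 0;
	}
	__initcall(kmem_cache_init_late);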

On Sun, May 24, 2009 at 9:18 PM, Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> I realize that the allocator wants to use the per-CPU area, but if we have
> just the boot CPU area set up statically at that point, since it's only
> the boot CPU running, maybe we could do those per-cpu area allocations
> without the bootmem allocator too?

We probably can. I don't see any fundamental reason why the slab
allocators can't bootstrap early in the boot sequence, once the page
allocator has been set up.
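Roughly the start_kernel() ordering we're talking about (heavily
elided, and the exact call sites vary between trees, so take the
placement as illustrative rather than as the final patch):

	asmlinkage void __init start_kernel(void)
	{
		/* ... early arch setup ... */
		setup_arch(&command_line);
		setup_per_cpu_areas();
		build_all_zonelists();
		page_alloc_init();
		/* ... */
		mem_init();		/* page allocator fully up */
		kmem_cache_init();	/* slab bootstrap, moved up */
		sched_init();		/* can now use kmalloc()
					   instead of bootmem */
		/* ... rest of boot ... */
	}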

On Sun, May 24, 2009 at 9:18 PM, Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> But even just getting bootmem out of the scheduler setup is a big
> improvement, I think. So this patch looks very promising as is.
> Did you test whether the other allocators were ok with this too?

SLUB and SLOB are fine, but SLAB explodes. I haven't investigated it
yet, but it's probably because SLAB expects interrupts to be enabled
when kmem_cache_init() is called.
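If that turns out to be it, the fix is probably to make SLAB's
bootstrap path pick its allocation mask based on how far along boot
is, instead of assuming it can sleep. A hand-wavy sketch (the helper
name is made up; it assumes a GFP_NOWAIT-style mask, i.e. GFP_ATOMIC
without access to the emergency pools):

	/*
	 * Early in boot interrupts are still off, so bootstrap
	 * allocations must not use a mask that can sleep or
	 * re-enable interrupts.  Once the allocator is fully up,
	 * GFP_KERNEL is fine again.
	 */
	static gfp_t bootstrap_gfp(void)
	{
		if (slab_is_available())
			return GFP_KERNEL;
		return GFP_ATOMIC & ~__GFP_HIGH;	/* no sleeping */
	}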
