Re: [RFC PATCH 0/6] module, kbuild: Faster boot with custom kernel.

From: Kay Sievers
Date: Thu Feb 19 2009 - 20:34:16 EST


On Fri, Feb 20, 2009 at 01:58, Rusty Russell <rusty@xxxxxxxxxxxxxxx> wrote:
> On Friday 20 February 2009 08:29:48 Kay Sievers wrote:
>> Further testing revealed that if I comment out only the stop_machine()
>> preparation, which is used in an error case, I get almost the same
>> improvement even with the original mutex in place. Without the mutex
>> it's still a bit better (maybe it would be much better if we had more
>> CPUs), but all the long delays are gone only when the stop_machine()
>> preparation is removed.
>
> Hmm, interesting. The reason that reducing the lock coverage had this effect
> is that stop_machine_create() just bumps a refcount if someone is already
> between ...create() and ...destroy().
>
> So, now we've found the problem, let's fix it, then re-visit mutex reduction.
>
> module: don't use stop_machine on module load
>
> Kay Sievers <kay.sievers@xxxxxxxx> discovered that boot times are slowed
> by about half a second because of all the stop_machine_create() calls,
> and he only probes about 40 modules (I have 125 loaded on this laptop).
>
> We only do stop_machine_create() so we can unlink the module if
> something goes wrong, but it's overkill (and buggy anyway: if
> stop_machine_create() fails we still call stop_machine_destroy()).

Sounds good. With that, no module takes more than 40 millisecs to link
now; most of them take between 3 and 8 millisecs.

Coldplug loads 39 modules; I end up with 50 loaded, but the rest come
after the udev coldplug settle time. The 39 modules get linked into
the kernel in 281 millisecs, which sounds pretty good.

That looks very different from the numbers without this patch in the
otherwise same setup, where we get heavy noise in the traces and many
linking delays of up to 200 millisecs, with most modules taking 30+
millisecs.

Thanks,
Kay