On Fri, Oct 5, 2012 at 8:54 AM, Nathan Zimmer <nzimmer@xxxxxxx> wrote:
> On 10/05/2012 09:14 AM, Joe Perches wrote:
>> On Fri, 2012-10-05 at 08:55 -0500, Nathan Zimmer wrote:
>>> On 10/04/2012 11:37 AM, Joe Perches wrote:
>>>> On Thu, 2012-10-04 at 11:02 -0500, Nathan Zimmer wrote:
>>>>> At many of our customer sites the log level is set to KERN_DEBUG. It
>>>>> helps avoid reboots due to operator impatience. Machines this large
>>>>> take significantly longer than typical to boot, and seeing the extra
>>>>> messages reassures them that the kernel isn't hung.
>>>> That argues for adding some KERN_INFO "still booting" messages,
>>>> not logging unnecessary KERN_DEBUG messages.
>>> Actually I would think that argues for reducing boot times on these
>>> large systems.
>> Right. That's an independent argument, but sure, go ahead
>> and do that too.
> Here is output for my workstation, a simple 4x box:
>
> -bash-4.1$ dmesg | grep "type [0-9][0-9] class" | wc
>      12     108     804
> -bash-4.1$ dmesg | wc
>     744    6359   49474
>
> Here is some output from one of the biggest boxes:
>
> -bash-4.1$ dmesg | wc
>   26503  235414 1811651
> -bash-4.1$ dmesg | grep "type [0-9][0-9] class" | wc
>   12085  108765  821780
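
For reference, the "log level" in question is the console loglevel, the
first field of /proc/sys/kernel/printk; it has to be above 7 for
KERN_DEBUG messages to reach the console during boot. A quick sketch
(the values shown are typical defaults, not output from Nathan's
systems):

$ cat /proc/sys/kernel/printk
7       4       1       7
$ echo 8 > /proc/sys/kernel/printk   # let KERN_DEBUG reach the console
$ # or boot with "loglevel=8" (or "debug") on the kernel command line
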
Many vendors don't expose host bridges that lead to the CPU-related
PCI devices because they don't want the OS to muck with them. We
currently blindly probe for these in domain 0, so we find them anyway
(I think we should change this behavior).
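
As a rough way to see what that blind probing ends up with, you can
count enumerated devices per PCI domain from sysfs (an illustrative
one-liner, not output from these machines; on most systems everything
lands under domain 0000):

$ ls /sys/bus/pci/devices | cut -d: -f1 | sort | uniq -c
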
I'd guess that having all these CPU-related devices around also really
clutters up "lspci" output, and of course, consumes memory for all the
pci_dev structs in the kernel. It takes some time to enumerate them
all, so avoiding that would speed up boot somewhat.
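
A quick way to gauge that clutter, since each sysfs entry corresponds
to one pci_dev the kernel allocated (commands are illustrative, not
from these machines):

$ lspci | wc -l                     # lines of lspci output
$ ls /sys/bus/pci/devices | wc -l   # pci_dev entries the kernel is carrying
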
So I wonder if it might be more useful to figure out how to avoid
enumerating those devices in the first place? The first step would be
to stop exposing PNP0A03/PNP0A08 host bridges that lead to them. As I
mentioned, we currently will probably find them anyway via blind
probing. You might be able to avoid that if you could place them in a
PCI domain other than 0.
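
To compare what the firmware exposes with what the kernel actually
scanned, something along these lines gives a rough picture (paths
assume ACPI and sysfs; root bus directories are named pci<domain>:<bus>):

$ ls -d /sys/bus/acpi/devices/PNP0A0[38]:*   # host bridges the firmware exposes
$ ls -d /sys/devices/pci[0-9a-f]*            # root buses the kernel scanned
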
Bjorn