Re: 2.6.8.1-mm3

From: William Lee Irwin III
Date: Fri Aug 20 2004 - 19:06:02 EST


At some point in the past, I wrote:
>> Parallel compilation is an extremely poor benchmark in general, as the
>> workload is incapable of being effectively scaled to system sizes, the
>> linking phase is inherently unparallelizable and the compilation phase
>> too parallelizable to actually stress anything. There is also precisely
>> zero relevance the benchmark has to anything real users would do.

On Sat, Aug 21, 2004 at 09:31:26AM +1000, Anton Blanchard wrote:
> I spend most of my life compiling kernels and I consider myself a real
> user :)

Kernel hacking is not an end in itself, regardless of the fact that there
are some, such as myself, who use computers for no other purpose. A
real user generally has some purpose to their activity beyond working
on the software or hardware they are "using"; e.g. various real users
use their systems for entertainment: playing games, music, and movies.
Others may use their systems to make money somehow, e.g. archiving
information about customers so they can look up what they've bought
and paid for or have yet to pay for.

Regardless of the social issue, the rather serious technical deficits
of compilation of any software as a benchmark are showstopping issues.
Frankly, even the issues I've dredged up are nowhere near comprehensive.
There are further issues, such as that stable versions (i.e. not varying
across benchmark runs done on various systems at various times) of both
the software being compiled and the toolchain used to compile it are
lacking as components of any "kernel compile benchmarking suite", and,
worse still, variance in the toolchain's target architecture also
defeats any attempt at meaningful benchmarking.

If you're truly concerned about compilation speed, userspace is going
to be the most productive area to work on anyway, as the vast majority
of time during compilation is spent in userspace. AIUI the userspace
algorithms in gcc are not particularly cognizant of cache locality and
in various instances have suboptimal time and space behavior, so it's
not as if there isn't work to be done there. Improving the compactness
and cache locality of data structures is important in userspace also,
and most (perhaps all) userspace programs are grossly ignorant of this.
FWIW, there are notable kernel hackers known to use very downrev gcc
versions due to regressions in compilation speed in subsequent versions,
so there are already large known differences in compilation speed that
can be obtained just by choosing a different compiler version.


-- wli
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/