> My kernel-relevant (or isn't it?) question is:
> UNIX, and therefore Linux, is a multitasking
> operating system. For me that means:
> From the processor's point of view, the
> code to be executed is switched many
> times, so the contents of the L1/L2
> caches and the processor pipelines are
> invalidated very often, and the caches
> are therefore """needless""".
>
> What is worth more:
> a board which has 512 KB of
> second-level cache but "only" 835 MB/s
> throughput to the L1 cache, _or_
> a board which has 876 MB/s throughput
> (L1 cache) but only 256 KB of L2 cache?
I'm not quite understanding this. Are you talking about
the L1-CPU bandwidth (which should be independent of the
board), the L2-L1 bandwidth, or the L2/main RAM - L1
bandwidth? The last will vary with workload, cache size,
and main RAM speed, and the test isn't quite right if it
was run with EDO RAM or SDRAM and you're planning on using
your old FPM SIMMs, which might not even be 60 ns.
> (The tests and benchmarks I know are based on
> tests with that nasty Windoze stuff, so
> only the physically measurable values,
> such as those above, are relevant to Linux,
> I think...)
>
> I have no idea what the relation is
> between the timing of context switches
> and scheduling cycles and the times
> during which the L1/L2 caches are used.
I would say get as big a L2 cache as possible,
esp. when you're using old RAM that may be two
grades (70 ns vs. 60 ns, FPM vs. EDO) slower than
what I'd recommend buying new. The working set on
the far side of a context switch still being in
cache by the time you switch back to it is a big
win if you are CPU bound (and pointless if you
aren't), and the extra L2 cache should mask the
old, slow DRAM a bit more.
I would also suggest looking to see if you can't
get the "faster" board with the 512 KB cache anyway,
and if you plan on overclocking the bus, look at the
corresponding numbers on at least an advisory basis.
Keith