Re: SSI clusters on NUMA (was Re: Scaling noise)

From: Martin J. Bligh
Date: Wed Sep 03 2003 - 23:48:56 EST


>> > how much of this need could be met with a native Linux master kernel
>> > and user-mode kernels running on top of it? (your resource sharing would
>> > obviously not be that clean, but you could develop the tools to work
>> > across the kernel images this way)
>>
>> I talked to Jeff and Andrea about this at KS & OLS this year ... the feeling
>> was that UML had too much overhead, but there were various ways to reduce
>> that, especially if the underlying OS had UML support (it doesn't require
>> that right now).
>>
>> I'd really like to see the performance proven to be better before basing
>> a design on UML, though that was my first instinct for how to do it ...
>
> I agree that UML won't be able to show the performance advantages (the
> fact that the UML kernel can't control its cache footprint on the CPUs,
> because it gets swapped from one to another at the host OS's convenience,
> is just one issue here)
>
> however, with UML you should be able to develop the tools and features to
> start to weld the two different kernels into a single logical image. Once
> people have a handle on how these tools work, you can then try them on some
> hardware that has a lower-level partitioning setup (i.e. the IBM
> mainframes) and do real speed comparisons between one kernel that's given
> X CPUs and Y memory and two kernels that are each given X/2 CPUs and Y/2
> memory.
>
> the fact that common hardware doesn't nicely support partitioning
> shouldn't stop people from solving the other problems.
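
To make that comparison concrete, a first cut wouldn't need any special
hardware: boot one UML instance with all of the memory, then two instances
with half each, and let the same workload run from each guest's init scripts.
The sketch below is purely illustrative -- the ./linux binary, the root_fs
images and the mem= sizes are made-up placeholders, not a tested setup:

/*
 * Rough harness: boot UML instances with a given memory size and wait
 * for them to exit.  Each guest is assumed to run the benchmark from
 * its init scripts and then halt.  Paths and sizes are illustrative.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Boot one UML instance; returns the child's pid, or -1 if fork fails. */
static pid_t boot_uml(const char *rootfs, const char *mem)
{
        pid_t pid = fork();

        if (pid == 0) {
                execl("./linux", "./linux", rootfs, mem, (char *)NULL);
                perror("execl");
                _exit(1);
        }
        return pid;
}

int main(void)
{
        /* Configuration A: one kernel, all of the memory. */
        pid_t one = boot_uml("ubd0=root_fs_a", "mem=512M");
        waitpid(one, NULL, 0);

        /* Configuration B: two kernels, half the memory each. */
        pid_t a = boot_uml("ubd0=root_fs_a", "mem=256M");
        pid_t b = boot_uml("ubd0=root_fs_b", "mem=256M");
        waitpid(a, NULL, 0);
        waitpid(b, NULL, 0);

        return 0;
}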

Yeah, it's definitely an interesting development environment at least.
FYI, most of the discussions in Ottawa centered around system call
overhead (4 TLB flushes per call, IIRC), but the cache footprint is
interesting too ... with the O(1) scheduler in the underlying OS, a UML
instance shouldn't get flip-flopped between CPUs too easily, but it's
interesting nonetheless.
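
For what it's worth, the placement side of the cache footprint problem can
be nailed down from the host with sched_setaffinity().  A minimal sketch,
assuming you already know the host PID of the UML instance and which CPU
you want it on (both are just placeholders here):

/*
 * Pin a UML host process to one CPU so its cache footprint stays put.
 * Sketch only: the UML host PID and CPU number come from the command
 * line and stand in for whatever your setup actually uses.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
        cpu_set_t mask;
        pid_t pid;
        int cpu;

        if (argc != 3) {
                fprintf(stderr, "usage: %s <uml-host-pid> <cpu>\n", argv[0]);
                return 1;
        }
        pid = atoi(argv[1]);
        cpu = atoi(argv[2]);

        CPU_ZERO(&mask);
        CPU_SET(cpu, &mask);

        /* Bind the process (and any children it forks later) to that CPU. */
        if (sched_setaffinity(pid, sizeof(mask), &mask) < 0) {
                perror("sched_setaffinity");
                return 1;
        }
        return 0;
}

New children inherit the affinity mask, so binding the main UML thread early
should keep the whole instance on one CPU, whatever the host scheduler would
otherwise do with it.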

M.
