Re: Query

From: Alan Cox
Date: Wed Mar 16 2011 - 07:54:43 EST


> Me and my friends are working on a new concept.

It's not really new. Various systems have done this historically, and
folks including Larry McVoy have proposed that, for large-scale
scalability, you might build a system out of multiple separate kernels,
one on each NUMA node, with interfaces to loan or share pages with one
another by bumping page counts and handling coherency.

Cool to see someone trying some of this in Linux.

> Our implementation is on Intel core 2 duo machine. So far our
> implementation includes running two kernels simultaneously (one on
> each core) , handling hard-disk on one core and ethernet on another
> core so as to divide the network and disk subsystem.
>
> But here we are unable to measure the performance. Can you please
> suggest any method to measure the performance in terms of throughput
> and response time?

There are a bunch of standard benchmarks you can use. A lot of the big
name ones need clusters of systems to do the loading but there are things
like dbench that are quite useful on single systems.

For some of the applications you are talking about I think dbench might
be a good start.


Alan
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/