Re: multicast performance
From: Matt Garman
Date: Tue Feb 09 2010 - 12:43:48 EST
On Fri, Nov 06, 2009 at 09:46:51AM +0100, Joachim Worringen wrote:
> On Thu, 2009-11-05 at 13:46 -0800, Greg Lindahl wrote:
> > On Thu, Nov 05, 2009 at 03:29:34PM -0600, Matt Garman wrote:
> > > Also, right now, we are more concerned with latency than
> > > throughput. Our total traffic is less than what the hardware
> > > can handle from a throughput perspective, but we are more
> > > sensitive to latency issues.
> > Sounds like you ought to be using cpu affinities to give the
> > receiver/stamper/queuer process its own dedicated core.
> I second this statement: you *have* to bind each thread to its
> own CPU - Linux thread scheduling on standard kernels is highly
> non-deterministic from my experience. Otherwise, even with
> sufficient cores available, not all runnable threads of a process
> will be executed concurrently.
For what it's worth:
$ uname -a
Linux lnxsvr38 2.6.9-78.ELlargesmp #1 SMP Fri Jul 25 00:02:41 EDT
2008 x86_64 x86_64 x86_64 GNU/Linux
$ dmesg | grep sched
Using cfq io scheduler
So, we pared the processes down to just one that sends. And
there is only one sending thread. However, the process does have
other processing threads. So you're suggesting that, even though we
have significantly more idle cores than running threads, the sender
program could effectively be running in "single core" mode, putting
the send thread to sleep while it runs the other threads... right?
How does one get metrics on this? I.e., how many times, and for how
long, is the sending thread subjected to context switches?
> Also, I'd be concerned about clock-synchronization issues when you
> are trying to measure in the low micro-second range (which I
> assume you are).
We're not quite at the usec range---we're in the tenths of
milliseconds range right now (i.e. 1/10,000 of a sec). Nonetheless,
we'll definitely consider this as we continue to investigate the
latency issue.
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html