Re: PREEMPT_RT vs I-PIPE: the numbers, part 2

From: Karim Yaghmour
Date: Wed Jun 22 2005 - 20:59:11 EST



Ingo Molnar wrote:
> if anything i wrote offended you i'd like to apologize for it. I feel
> pretty strongly about the stuff i do, but i always regret 99.9% of the
> flames the next morning :) Also, i only realized when reading your
> reply that you took my "vendor sponsored benchmarking" remark literally
> (and that's my bad too). I never thought of you as a 'vendor' or having
> any commercial interest in this benchmarking - it was just a stupid
> analogy from me. I should have written "supporter driven benchmarking"
> or so - that would still have been a pretty nice flame ;)

Thanks for the clarification; sorry if I jumped to the wrong conclusions
about what you meant.

> also please consider the other side of the coin. You posted numbers that
> initially put PREEMPT_RT in a pretty bad light. Those numbers are still
> being linked to from your website, without any indication to suggest
> that they are incorrect. Even in your above paragraph you are not
> talking about flawed numbers, you are talking about 'changing the tests
> to fit my requirements'. Heck i have no 'requirements' other than to see
> fair numbers. And if adeos/ipipe happens to beat PREEMPT_RT in a fair
> irq latency test you won't hear a complaint from me. (you might see a
> patch quite soon though ;)

OK, please recheck the webpage; I've now added a warning specifically
to the effect that the numbers need to be rerun. Hopefully this clears
things up.

> And i know what irq latencies to expect from PREEMPT_RT. It takes me 5
> minutes to do a 10-million-sample irq test using LPPTEST, the histogram
> takes only 200 bytes on the screen, and the numbers i'm getting differ
> from your numbers - but obviously i cannot run it on your hardware. The
> rtc_wakeup and built-in latency-tracer numbers differ too. They could be
> all wrong though, so i'm curious what your independent method will
> yield.

Well, the method we're using is certainly not absolute; that's why
we're providing the scripts. There's no telling whether others (outside
yourself and us) will come up with yet more outlandish results. But
hopefully, the more we study these things, the more consistently we can
characterize them.
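
For illustration, here's the rough shape a user-space latency sampler
can take. To be clear, this is only a sketch, not LPPTEST and not our
scripts: it times periodic clock_nanosleep() wakeups on CLOCK_MONOTONIC,
so it measures scheduler wakeup latency rather than raw irq latency, and
the sample count and bucket width are made up for illustration.

/*
 * lat.c - minimal user-space wakeup-latency sampler (sketch only).
 * Build: gcc -O2 -o lat lat.c -lrt
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define NSAMPLES  100000	/* far fewer than a 10-million-sample run */
#define PERIOD_NS 1000000	/* wake up every 1 ms */
#define NBUCKETS  200		/* one bucket per microsecond of lateness */

int main(void)
{
	long hist[NBUCKETS + 1] = { 0 };
	struct timespec next, now;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &next);
	for (i = 0; i < NSAMPLES; i++) {
		next.tv_nsec += PERIOD_NS;
		if (next.tv_nsec >= 1000000000L) {
			next.tv_nsec -= 1000000000L;
			next.tv_sec++;
		}
		/* sleep to an absolute deadline, then see how late we woke */
		clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
		clock_gettime(CLOCK_MONOTONIC, &now);

		int64_t late_us = ((int64_t)(now.tv_sec - next.tv_sec) * 1000000000LL
				   + (now.tv_nsec - next.tv_nsec)) / 1000;
		if (late_us < 0)
			late_us = 0;
		if (late_us > NBUCKETS)
			late_us = NBUCKETS;	/* overflow bucket */
		hist[late_us]++;
	}

	for (i = 0; i <= NBUCKETS; i++)
		if (hist[i])
			printf("%3ld us%s: %ld\n", i,
			       i == NBUCKETS ? "+" : " ", hist[i]);
	return 0;
}

Run it under a real-time priority (e.g. chrt -f 99 ./lat) on both
kernels; the shape of the histogram tail matters more than the absolute
numbers.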

> your lmbench results look accurate and fair, the slowdown during
> irq-load is a known and expected effect of IRQ threading. If you flood
> ping a box and generate context-switches instead of plain interrupts,
> there will be noticeable overhead. I checked some of the lmbench numbers
> today on my testbox, and while there's overhead, it's significantly less
> than the 90% degradation you were seeing. That's why i suggested that
> you retest using the current base - but of course you don't 'have to'.
> There were a number of bugs fixed in the past few dozen iterations of
> patches that affected various components of lmbench :)

I certainly welcome this. Thanks for partly confirming our results and
for pointing out that newer versions do better. Like I said earlier,
we're bound to repeat our tests, taking into account everything that's
been suggested so far, so we will redo them with the version you
mentioned earlier. Hopefully this time we'll get it right.
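
To make the irq-threading effect concrete, here is a minimal pipe
ping-pong in the spirit of lmbench's lat_ctx (again a sketch, not
lmbench itself): two processes bounce a byte across a pair of pipes, so
every round trip costs two context switches. Running it with and
without a flood ping against the box should show the kind of
degradation we've been discussing.

/*
 * ctx.c - pipe ping-pong context-switch timer (sketch only).
 * Build: gcc -O2 -o ctx ctx.c -lrt
 */
#include <stdio.h>
#include <unistd.h>
#include <time.h>

#define ROUNDS 100000

int main(void)
{
	int p2c[2], c2p[2];	/* parent->child and child->parent pipes */
	char tok = 'x';
	struct timespec t0, t1;
	long i;

	if (pipe(p2c) || pipe(c2p)) {
		perror("pipe");
		return 1;
	}
	pid_t pid = fork();
	if (pid < 0) {
		perror("fork");
		return 1;
	}
	if (pid == 0) {		/* child: echo the token back */
		for (i = 0; i < ROUNDS; i++) {
			if (read(p2c[0], &tok, 1) != 1)
				break;
			if (write(c2p[1], &tok, 1) != 1)
				break;
		}
		_exit(0);
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < ROUNDS; i++) {
		if (write(p2c[1], &tok, 1) != 1)
			break;
		if (read(c2p[0], &tok, 1) != 1)
			break;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	long long ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
		     + (t1.tv_nsec - t0.tv_nsec);
	/* each round trip is two switches (plus pipe overhead) */
	printf("%.2f us per switch\n", ns / 1000.0 / (2.0 * ROUNDS));
	return 0;
}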

Karim
--
Author, Speaker, Developer, Consultant
Pushing Embedded and Real-Time Linux Systems Beyond the Limits
http://www.opersys.com || karim@xxxxxxxxxxx || 1-866-677-4546