Re: [PATCH] local_irq_disable removal
From: Thomas Gleixner
Date: Sat Jun 11 2005 - 12:36:00 EST
On Sat, 2005-06-11 at 09:36 -0700, Daniel Walker wrote:
>
> On Sat, 11 Jun 2005, Esben Nielsen wrote:
>
> > For me it is perfectly ok if RCU code, buffer caches etc use
> > raw_local_irq_disable(). I consider that code to be "core" code.
>
> This distinction seems completely baseless to me. "Core" code doesn't
> carry any special weight. The question is: can the code be called from
> real interrupt context? If not, then don't protect it.
>
> >
> > The current soft-irq states only gives us better hard-irq latency but
> > nothing else. I think the overhead runtime and the complication of the
> > code is way too big for gaining only that.
>
> Interrupt response is massive, check the Adeos vs. RT numbers. They did
> one test which was just interrupt latency.
Performance on RT systems is about more than IRQ latencies.
The widespread misbelief that
"Realtime == As fast as possible"
seems to be still stuck in people's minds.
"Realtime == As fast as specified"
is the correct equation.
There is always a tradeoff between interrupt latencies and other
performance metrics, as you have to invent new mechanisms to protect
critical sections. In the end, those mechanisms can cost more than the
gain in irq latencies is worth.
While working on high resolution timers on top of RT, I was able to show
that changing a couple of short-held spinlocks into raw locks (with
hardirq disable) results in a 50-80% latency improvement for the
scheduled tasks, while increasing the interrupt latency by only 5-10%.
The different numbers are related to different CPUs (x86, PPC, ARM).
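To make the kind of conversion concrete, here is a minimal sketch (the
lock and function names are invented for illustration, and the
raw_spin_lock_* naming follows the later mainline spelling, not the exact
annotation used in the RT patch at the time): on RT, a spinlock_t becomes
a preemptible sleeping lock, while a raw lock keeps the classic
hardirq-disabling semantics.

```c
/* Illustrative sketch only - names are made up, not from a real driver. */

/* On PREEMPT_RT a spinlock_t becomes a sleeping lock: the critical
 * section can be preempted and hardirqs stay enabled.
 */
static DEFINE_SPINLOCK(timer_base_lock);

static void queue_timer_rt(struct my_timer *t)
{
	spin_lock(&timer_base_lock);	/* preemptible on RT */
	enqueue_timer(t);
	spin_unlock(&timer_base_lock);
}

/* A raw lock keeps the classic semantics: hardirqs off, no preemption.
 * Worth it only for short-held, latency-critical sections - it trades a
 * small increase in interrupt latency for deterministic task latency.
 */
static DEFINE_RAW_SPINLOCK(timer_base_raw_lock);

static void queue_timer_raw(struct my_timer *t)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&timer_base_raw_lock, flags);
	enqueue_timer(t);
	raw_spin_unlock_irqrestore(&timer_base_raw_lock, flags);
}
```

The second form is what the measurement above refers to: a handful of
such conversions in the timer path moved the latency cost from the
scheduled tasks to the (much smaller) interrupt-latency budget.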
tglx