Re: CONFIG_PREEMPT and server workloads
From: Andrew Morton
Date: Thu Mar 18 2004 - 04:55:05 EST
Andrea Arcangeli <andrea@xxxxxxx> wrote:
>
> On Thu, Mar 18, 2004 at 05:00:01AM +0100, Marinos J. Yannikos wrote:
> > Hi,
> >
> > we upgraded a few production boxes from 2.4.x to 2.6.4 recently and the
> > default .config setting was CONFIG_PREEMPT=y. To get straight to the
> > point: according to our measurements, this results in severe performance
> > degradation with our typical and some artificial workload. By "severe" I
> > mean this:
>
> this is expected (see the email below; I predicted it in March 2000),
Incorrectly.
> keep preempt turned off always, it's useless.
Preempt is overrated. The infrastructure it introduced has, however, been
useful for detecting locking bugs.
It has been demonstrated that preempt improves average latency, but not
worst-case latency: the worst-case paths tend to run under spinlocks, where
preemption is disabled anyway.
> Worst of all we're now taking spinlocks earlier than needed,
Where? CPU scheduler?
> and the preempt_count stuff isn't optimized away by PREEMPT=n,
It should be. If you see somewhere where it isn't, please tell us.
We unconditionally bump the preempt_count in kmap_atomic() so that we can
use atomic kmaps in read() and write(). This is why four concurrent
write(fd, buf, 1) processes on a 4-way box are 8x faster than on 2.4
kernels.
> preempt just wastes cpu with tons of branches in fast paths that should
> take one cycle instead.
I don't recall anyone demonstrating even a 1% impact from preemption. If
preemption were really causing slowdowns of this magnitude it would of
course have been noticed. Something strange has happened here and more
investigation is needed.
> ...
> I still think after 4 years that such an idea is more appealing than
> preempt, and the numbers are starting to prove me right.
The overhead of CONFIG_PREEMPT is quite modest. Measuring that is simple.
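One simple way to put a number on it is to run the same kernel-bound
workload under a PREEMPT=y kernel and a PREEMPT=n kernel and compare
wall-clock times. The dd pipe below is just one convenient syscall-heavy
workload chosen for illustration; any fast-path-heavy load (lmbench's
syscall latency tests, or the parallel write() test above) would do.

```shell
#!/bin/sh
# Rough sketch: run once on each kernel and compare the timings.
uname -r    # note which kernel this run is on
time dd if=/dev/zero of=/dev/null bs=4k count=100000
```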