Re: [PATCH 0/4] pending scheduler updates
From: Mike Galbraith
Date: Wed Oct 22 2008 - 08:38:19 EST
On Wed, 2008-10-22 at 14:10 +0200, Ingo Molnar wrote:
> * Mike Galbraith <efault@xxxxxx> wrote:
>
> > On Wed, 2008-10-22 at 12:03 +0200, Mike Galbraith wrote:
> >
> > > It has positive effects too, but IMHO, the bad outweigh the good.
> >
> > BTW, the most dramatic case at the other end of the spectrum is
> > pgsql+oltp. With preemption as is and the preemption knobs at stock
> > settings, it collapses as load climbs toward heavy. Postgres uses
> > user-land spinlocks and _appears_ to wake others while these are
> > still held. For this load there is such a thing as too much
> > short-term fairness: preempting the lock holder creates a nasty
> > gaggle of contended lock spinners. It's curable with knobs, and I
> > think it's postgres's own fault, but I may be wrong.
> >
> > With that patch, pgsql+oltp scales perfectly.
>
> hm, tempting.
I disagree. Postgres's scaling problem is trivially corrected by
twiddling knobs (or whatnot). With that patch, you can't twiddle mysql
throughput back, nor disk-intensive loads for that matter. You can
tweak the preempt number, but it has nothing to do with lag, so anybody
can preempt anybody else as you turn the knob toward zero. Chaos.
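To make that concrete, here is a minimal sketch of a granularity-style
wakeup preemption test, guarded only by a tunable threshold. This is
not the actual kernel code; the struct, function, knob name and default
value are all made up for illustration. The point is simply that when
such a threshold is the only guard, turning it toward zero lets any
freshly woken task preempt whatever is running:

#include <stdbool.h>

struct entity {
	unsigned long long vruntime;	/* virtual runtime, in ns */
};

/* hypothetical stand-in for a knob like sched_wakeup_granularity_ns */
static unsigned long long wakeup_granularity_ns = 5000000ULL;

/*
 * Preempt the running task when the newly woken one is behind it by
 * more than the granularity.  With the knob near zero this is almost
 * always true, hence "anybody can preempt anybody else".
 */
static bool wakeup_should_preempt(const struct entity *curr,
				  const struct entity *woken)
{
	return curr->vruntime > woken->vruntime + wakeup_granularity_ns;
}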
> Have you tried to hack/fix pgsql to do proper wakeups?
No, I tried building without spinlocks to verify that, but the build
croaked. I never went back to slogging through the code.
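For reference, the ordering in question looks roughly like this. It's
my own sketch of the pattern, not postgres source, and every name in
it is invented. The problem case signals a waiter while the user-land
spinlock is still held, so if the scheduler preempts the holder in
favour of the woken task, the new runner spins on a lock whose owner
is off the CPU. Dropping the lock before waking avoids that:

#include <pthread.h>
#include <stdatomic.h>

static atomic_flag slock = ATOMIC_FLAG_INIT;	/* user-land spinlock */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;

static void spin_lock(void)
{
	while (atomic_flag_test_and_set_explicit(&slock, memory_order_acquire))
		;			/* burn CPU until the owner lets go */
}

static void spin_unlock(void)
{
	atomic_flag_clear_explicit(&slock, memory_order_release);
}

/* Problem case: wake the waiter while still holding the spinlock. */
static void wake_while_holding(void)
{
	spin_lock();
	/* ... update shared state ... */
	pthread_mutex_lock(&m);
	pthread_cond_signal(&c);	/* woken task may preempt us right here */
	pthread_mutex_unlock(&m);
	spin_unlock();			/* too late if we were preempted above */
}

/* Friendlier: drop the spinlock first, then wake. */
static void unlock_then_wake(void)
{
	spin_lock();
	/* ... update shared state ... */
	spin_unlock();
	pthread_mutex_lock(&m);
	pthread_cond_signal(&c);
	pthread_mutex_unlock(&m);
}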
> Right now pgsql punishes schedulers that preempt it while it is
> holding totally undeclared (to the kernel) user-space spinlocks ...
>
> Hence postgresql is rewarding a _bad_ scheduler policy in essence. And
> pgsql scalability seems to fall totally apart above 16 cpus - regardless
> of scheduler policy.
If someone gives me that problem, and a credit card for the electric
company, I'll do my very extra special best to defeat it ;-)
-Mike