Re: [patch 2/3] scheduler: add full memory barriers upon taskswitch at runqueue lock/unlock

From: Peter Zijlstra
Date: Mon Feb 01 2010 - 05:36:45 EST


On Mon, 2010-02-01 at 21:11 +1100, Nick Piggin wrote:
> On Mon, Feb 01, 2010 at 10:42:30AM +0100, Peter Zijlstra wrote:
> > On Mon, 2010-02-01 at 18:33 +1100, Nick Piggin wrote:
> > > > Adds no overhead on x86, because the LOCK-prefixed atomic operations
> > > > used by the spin lock/unlock already imply a full memory barrier.
> > > > Combines the spin lock acquire/release barriers with the full memory
> > > > barrier to reduce the performance impact on other architectures. (A
> > > > per-architecture spinlock-mb.h should be gradually implemented to
> > > > replace the generic version.)
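
For reference, my reading of that generic fallback is roughly the sketch
below; the helper and header names are made up for illustration, they are
not lifted from the patch:

	/*
	 * asm-generic/spinlock-mb.h style fallback (sketch): on archs where
	 * the spin lock/unlock primitives are not full barriers, pair the
	 * scheduler's rq lock/unlock with an explicit smp_mb().
	 */
	#ifndef smp_mb__after_rq_lock
	# define smp_mb__after_rq_lock()	smp_mb()
	#endif
	#ifndef smp_mb__before_rq_unlock
	# define smp_mb__before_rq_unlock()	smp_mb()
	#endif

	/*
	 * An arch whose lock primitives already provide full ordering could
	 * then override both in its own spinlock-mb.h and avoid the extra
	 * barrier in the context switch path.
	 */
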
> > >
> > > It does add overhead on x86, as well as most other architectures.
> > >
> > > This really seems like the wrong optimisation to make, especially
> > > given that there's not likely to be much using librcu yet, right?
> > >
> > > I'd go with the simpler and safer version of sys_membarrier that does
> > > not do tricky synchronisation or add overhead to the ctxsw fastpath.
> > > Then if you see some actual improvement in a real program using librcu
> > > one day we can discuss making it faster.
> > >
> > > As it is right now, the change will definitely slow down everybody
> > > not using librcu (ie. nearly everything).
> >
> > Right, so the problem with the 'slow'/'safe' version is that it takes
> > rq->lock for all relevant rqs. This turns while (1) sys_membarrier();
> > into a quite effective DoS.
>
> All, but one at a time, no? How much of a DoS really is taking these
> locks for a handful of cycles each, per syscall?

I was worrying more about the cacheline thrashing than about the lock hold
times there.
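
To make that concrete, the 'slow'/'safe' variant under discussion has
roughly the shape below; sketch only, it would live in kernel/sched.c and
the mm/cpumask filtering and error handling are left out, so don't read it
as the actual patch:

	/*
	 * Sketch of the "slow" sys_membarrier(): serialise against every
	 * cpu's runqueue lock, relying on the barriers implied by the
	 * scheduler's own rq lock/unlock to order the caller against
	 * currently running threads.
	 */
	SYSCALL_DEFINE0(membarrier)
	{
		int cpu;

		smp_mb();	/* full barrier on the calling cpu */

		for_each_online_cpu(cpu) {
			struct rq *rq = cpu_rq(cpu);
			unsigned long flags;

			/*
			 * This lock/unlock is what drags each cpu's rq->lock
			 * cacheline over to the calling cpu -- do it in a
			 * loop and the per-cpu lock behaves like a global one.
			 */
			raw_spin_lock_irqsave(&rq->lock, flags);
			raw_spin_unlock_irqrestore(&rq->lock, flags);
		}

		smp_mb();
		return 0;
	}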

> I mean, we have LOTS of syscalls that take locks, and for a lot longer,
> (look at dcache_lock).

Yeah, and dcache is a massive pain, isn't it ;-)

> I think we basically just have to say that as long as locking primitives
> are somewhat fair and locks aren't held for too long, it should more or
> less work.

Sure, it'll more or less work, but he's basically turning rq->lock into a
global lock instead of a per-cpu lock.
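
And that's exactly what the while (1) case exploits; from userspace the
'attack' is nothing fancier than the below (sketch; the syscall number is a
placeholder, since the call isn't in mainline yet):

	/*
	 * Hammer the proposed syscall in a tight loop; in the slow scheme
	 * above every iteration walks and bounces every cpu's rq->lock.
	 */
	#define _GNU_SOURCE
	#include <unistd.h>
	#include <sys/syscall.h>

	#ifndef __NR_membarrier
	#define __NR_membarrier	-1	/* placeholder, number not allocated yet */
	#endif

	int main(void)
	{
		for (;;)
			syscall(__NR_membarrier);
		return 0;
	}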

> If the locks are getting contended, then the threads calling
> sys_membarrier are going to be spinning longer too, using more CPU time,
> and will get scheduled away...

Sure, and increased spinning reduces the total throughput.

> If there is some particular problem on -rt because of the rq locks,
> then I guess you could consider whether to add more overhead to your
> ctxsw path to reduce the problem, or simply not support sys_membarrier
> for unprivileged users in the first place.

Right, for -rt we might need to do that, but it's just that rq->lock is a
very hot lock, and adding basically unbounded thrashing to it didn't seem
like a good idea.

Also, I'm thinking that making it a privileged syscall basically renders it
useless for Mathieu.

Anyway, it might be that I'm just paranoid... but archs with large core
counts and lazy TLB flushing seem particularly vulnerable.
