Re: [PATCH 3/7] barrier: a scalable synchronisation barrier
From: Paul E. McKenney
Date: Fri Feb 02 2007 - 12:13:31 EST
On Fri, Feb 02, 2007 at 02:56:12PM +0300, Oleg Nesterov wrote:
> On 02/01, Paul E. McKenney wrote:
> > > >
> > > > void synchronize_qrcu(struct qrcu_struct *qp)
> > > > {
> > > > 	int idx;
> > > >
> > > > 	smp_mb();
> > > >
> > > > 	if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1) {
> > > > 		smp_rmb();
> > > > 		if (atomic_read(&qp->ctr[0]) +
> > > > 		    atomic_read(&qp->ctr[1]) <= 1)
> > > > 			goto out;
> > > > 	}
> > > >
> > > > 	mutex_lock(&qp->mutex);
> > > > 	idx = qp->completed & 0x1;
> > > > 	atomic_inc(qp->ctr + (idx ^ 0x1));
> > > > 	/* Reduce the likelihood that qrcu_read_lock() will loop */
> > > > 	smp_mb__after_atomic_inc();
> > > > 	qp->completed++;
> > > >
> > > > 	atomic_dec(qp->ctr + idx);
> > > > 	__wait_event(qp->wq, !atomic_read(qp->ctr + idx));
> > > > 	mutex_unlock(&qp->mutex);
> > > > out:
> > > > 	smp_mb();
> > > > }
> > > >
> > > > For the first "if" to give a false positive, a concurrent switch had
> > > > to have happened. For example, qp->ctr[0] was zero and qp->ctr[1]
> > > > was two at the time of the first atomic_read(), but then qp->completed
> > > > switched so that both qp->ctr[0] and qp->ctr[1] were one at the time
> > > > of the second atomic_read. The only way the second "if" can give us a
> > > > false positive is if there was another change to qp->completed in the
> > > > meantime -- but that means that all of the pre-existing qrcu_read_lock()
> > > > holders must have gotten done, otherwise the second switch could not
> > > > have happened. Yes, you do incur three memory barriers on the fast
> > > > path, but the best you could hope for with your approach was two of them
> > > > (unless I am confused about how you were using barrier_sync()).
>
> Yes. Without a synchronize_qrcu() in between, one of the counters should be == 0
> and the other >= 1, so a sum == 1 means we have no active readers. A false positive
> therefore really means a concurrent switch. And we can check twice - excellent idea!
Well, if it ends up really working. ;-)
This one needs more than just testing -- I will put together a Promela
model that does a full state-space search for races.
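For reference, the other pieces the model has to capture are the structure and
the read side.  From memory they look roughly like the following -- treat this
as a sketch only, Oleg's qrcu patch is the authoritative version.  The key point
is that ctr[] carries a bias of one on the current side, which is why a sum <= 1
above means there are no readers:

	struct qrcu_struct {
		int completed;		/* low bit selects the current ctr[] */
		atomic_t ctr[2];	/* current side starts biased at 1 */
		wait_queue_head_t wq;	/* synchronize_qrcu() sleeps here */
		struct mutex mutex;	/* serializes updaters */
	};

	int qrcu_read_lock(struct qrcu_struct *qp)
	{
		for (;;) {
			int idx = qp->completed & 0x1;

			/* Fails only if this side has already drained to zero,
			 * which means a flip raced with us -- so retry. */
			if (likely(atomic_inc_not_zero(qp->ctr + idx)))
				return idx;
		}
	}

	void qrcu_read_unlock(struct qrcu_struct *qp, int idx)
	{
		/* Dropping the last reference wakes a waiting updater. */
		if (atomic_dec_and_test(qp->ctr + idx))
			wake_up(&qp->wq);
	}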
I do have one fall-back, namely putting both counters into a single
aligned long. But I would like to avoid this, since there might be
architectures out there that cannot cleanly store into half-longs.
Such architectures would have to use atomic ops. :-/
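Just to sketch what I have in mind there (completely untested, the names are
made up, and it assumes a 64-bit long holding two 32-bit counters purely for
illustration):

	/* Hypothetical packed layout, not from any posted patch. */
	union qrcu_ctr_pair {
		unsigned long both;	/* one aligned load samples both counters */
		unsigned int half[2];	/* readers/updaters store into their half */
	};

	static inline int qrcu_readers_idle(union qrcu_ctr_pair *cp)
	{
		union qrcu_ctr_pair snap;

		/*
		 * A single load, so there is no window for a counter
		 * flip between reading one half and the other.
		 */
		snap.both = *(volatile unsigned long *)&cp->both;
		return snap.half[0] + snap.half[1] <= 1;
	}

The half[] stores are of course exactly the part that some architectures
cannot do cleanly, hence my reluctance.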
> > > While doing qrcu, somehow I convinced myself we can't optimize out taking
> > > qp->mutex. Now I think I was wrong. Good!
> >
> > Me, I didn't want to worry about it unless someone needed it. Which
> > it now appears they do. ;-)
>
> No. I do remember I tried hard to optimize out taking qp->mutex, but failed.
> So I decided it was not possible. And now you show that I just don't have enough
> brains! (of course, I hate you :)
Coming from you, that is high praise indeed!!! ;-)
Now if it really does work...
> > > Q: you deleted the "if (atomic_read(qp->ctr + idx) == 1)" fastpath under ->mutex.
> > > Was this needed for this optimization to work? I am asking because I can't
> > > see how it makes any difference.
> >
> > Before, we held the lock, so we could just check the single current
> > element. Now we don't hold the lock, so we need to check both elements.
> > So I replaced the "if (atomic_read(qp->ctr + idx) == 1)" with the
> > nested "if" statements that test both elements.
>
> Ah, my question was different. The current version of qrcu does
>
> 	mutex_lock(&qp->mutex);
>
> 	idx = qp->completed & 0x1;
> 	if (atomic_read(qp->ctr + idx) == 1) // fast path
> 		return;
>
> 	...
>
> and it seems to me that we can retain this fastpath even with your optimization,
> no? Sure, it is not that important, but it is nearly free.
Ah! This does make sense, excellent point!!!
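Something like the following, then?  (Untested sketch, just to make sure I
understand your suggestion: the lockless double-check up front, plus your
under-mutex fastpath retained once we hold the lock.)

	void synchronize_qrcu(struct qrcu_struct *qp)
	{
		int idx;

		smp_mb();

		if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1) {
			smp_rmb();
			if (atomic_read(&qp->ctr[0]) +
			    atomic_read(&qp->ctr[1]) <= 1)
				goto out;
		}

		mutex_lock(&qp->mutex);
		idx = qp->completed & 0x1;
		if (atomic_read(qp->ctr + idx) == 1)	/* your fastpath, retained */
			goto out_unlock;

		atomic_inc(qp->ctr + (idx ^ 0x1));
		/* Reduce the likelihood that qrcu_read_lock() will loop */
		smp_mb__after_atomic_inc();
		qp->completed++;

		atomic_dec(qp->ctr + idx);
		__wait_event(qp->wq, !atomic_read(qp->ctr + idx));
	out_unlock:
		mutex_unlock(&qp->mutex);
	out:
		smp_mb();
	}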
> Paul, could you make a patch? (I'll run an rcutorture test tomorrow, but I only
> have a P4 HT).
I will do the Promela model, and if that works, I will submit a patch.
Thanx, Paul
> Peter, do you think you can use qrcu?
>
> Oleg.
>