Re: [PATCH] bug in futex unqueue_me

From: Ingo Molnar
Date: Sun Jul 30 2006 - 02:43:17 EST



* Christian Borntraeger <borntrae@xxxxxxxxxx> wrote:

> From: Christian Borntraeger <borntrae@xxxxxxxxxx>
>
> This patch adds a barrier() in futex unqueue_me to avoid aliasing of
> two pointers.
>
> On my s390x system I saw the following oops:

> So the code becomes more or less:
> if (q->lock_ptr != 0) spin_lock(q->lock_ptr)
> instead of
> if (lock_ptr != 0) spin_lock(lock_ptr)
>
> Which caused the oops from above.

interesting, how is this possible? We do a spin_lock(lock_ptr), and
taking a spinlock is an implicit barrier(), so gcc must not delay
evaluating lock_ptr into the critical section. And as far as I can see,
the s390 spinlock implementation goes through an 'asm volatile' piece of
code, which is a barrier already. So how could this have happened? I
have nothing against adding a barrier(), but we should first investigate
why the spin_lock() didn't act as a barrier - there might be other,
similar bugs hiding. (We rely on spin_lock()'s barrier-ness in a fair
number of places.)
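
roughly, the problematic sequence is something like the sketch below
(simplified and illustrative, not the literal kernel/futex.c code;
barrier() is the usual gcc compiler barrier):

	/* snapshot the pointer - another CPU can set q->lock_ptr to
	 * NULL (wake_futex()) or switch it to another hash-bucket
	 * lock (futex_requeue()) at any time */
	lock_ptr = q->lock_ptr;

	/* the proposed fix: without this, gcc may discard the
	 * snapshot and re-read q->lock_ptr below, so the value
	 * tested and the value passed to spin_lock() can differ */
	barrier();

	if (lock_ptr != NULL)
		spin_lock(lock_ptr);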

> As a general note, this code of unqueue_me seems a bit fishy. The
> retry logic of unqueue_me only works if we can guarantee that the
> original value of q->lock_ptr is always a spinlock (otherwise we
> overwrite kernel memory). We know that q->lock_ptr can change. I don't
> know what happens with the original spinlock, as I am not an expert
> on the futex code.

yes, it is always a pointer to a valid spinlock, or NULL.
futex_requeue() can change the spinlock from one to another, and
wake_futex() can change it to NULL. The futex unqueue_me() fastpath is
when a futex waiter was woken - in which case it's NULL. But it can
still be non-NULL if we timed out or a signal happened, in which case we
may race with a wakeup or a requeue. futex_requeue() changes the
spinlock pointer only while it holds both the old and the new spinlock,
so it's race-free as far as I can see.
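
in sketch form, that retry logic looks roughly like this (again a
simplified sketch, not the literal unqueue_me() code):

	retry:
		lock_ptr = q->lock_ptr;	/* NULL once we were woken */
		/* (plus the barrier() under discussion here) */
		if (lock_ptr != NULL) {
			spin_lock(lock_ptr);
			/* a requeue may have switched q->lock_ptr to
			 * another hash-bucket lock before we acquired
			 * the snapshot; since futex_requeue() holds
			 * both locks while switching, re-checking
			 * under the lock is sufficient */
			if (lock_ptr != q->lock_ptr) {
				spin_unlock(lock_ptr);
				goto retry;
			}
			/* ... unqueue under the now-stable lock ... */
			spin_unlock(lock_ptr);
		}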

Ingo