Re: [patch] queued spinlocks (i386)

From: Nick Piggin
Date: Thu Mar 29 2007 - 21:53:56 EST


On Thu, Mar 29, 2007 at 10:42:13PM +0400, Oleg Nesterov wrote:
> On 03/28, Nick Piggin wrote:
> >
> > Well with my queued spinlocks, all that lockbreak stuff can just come out
> > of the spin_lock, break_lock out of the spinlock structure, and
> > need_lockbreak just becomes (lock->qhead - lock->qtail > 1).
>
> Q: queued spinlocks are not CONFIG_PREEMPT friendly,

I consider the re-enabling of preemption and interrupts to be a hack
anyway: if you already have interrupts or preemption disabled at entry
time, they remain disabled for the whole wait, so that path can't help
you there.

IMO the real solution is to ensure spinlock critical sections don't get
too large, and perhaps use fair spinlocks to prevent starvation.
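
To make the need_lockbreak point above concrete, here's a rough
userspace sketch of the data structure (illustration only, not the
actual patch; names and types are approximate):

/* Sketch of a queued (ticket) spinlock -- illustration only. */
#include <stdint.h>

typedef struct {
	volatile uint16_t qhead;	/* next ticket to hand out */
	volatile uint16_t qtail;	/* ticket currently holding the lock */
} queued_spinlock_t;

/*
 * The lock holder accounts for one outstanding ticket; anything more
 * means other CPUs are queued behind it.
 */
static inline int need_lockbreak(queued_spinlock_t *lock)
{
	return (uint16_t)(lock->qhead - lock->qtail) > 1;
}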

>
> > + asm volatile(LOCK_PREFIX "xaddw %0, %1\n\t"
> > + : "+r" (pos), "+m" (lock->qhead) : : "memory");
> > + while (unlikely(pos != lock->qtail))
> > + cpu_relax();
>
> once we've incremented lock->qhead, we have no option but to spin
> with preemption disabled until pos == lock->qtail, yes?

Correct. For the purposes of deadlock behaviour, we have effectively
taken the lock at that point.
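
To spell that out in C (again just a sketch continuing the made-up
names from above; I'm assuming the unlock path simply advances qtail,
which is what the acquire loop implies): once the xadd has bumped
qhead we hold a ticket that everyone queued behind us is waiting on,
so we can't back out or let ourselves be preempted.

/* Illustrative lock/unlock for the sketch above -- not the patch. */
static inline void queued_spin_lock(queued_spinlock_t *lock)
{
	/* Atomically take a ticket and advance qhead (the xaddw above). */
	uint16_t pos = __sync_fetch_and_add(&lock->qhead, 1);

	/*
	 * From here we are committed: CPUs queued behind us only make
	 * progress once our ticket comes up and we release the lock.
	 */
	while (pos != lock->qtail)
		__builtin_ia32_pause();		/* cpu_relax() */
}

static inline void queued_spin_unlock(queued_spinlock_t *lock)
{
	/*
	 * Hand the lock to the next ticket in the queue.  Only the
	 * holder writes qtail; the real code would also need proper
	 * release ordering here.
	 */
	lock->qtail++;
}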