Re: [patch] queued spinlocks (i386)
From: Oleg Nesterov
Date: Sun Mar 25 2007 - 12:50:58 EST
I am sorry for being completely off-topic, but I've been wondering about this
for a long time...
What if we replace raw_spinlock_t.slock with "struct task_struct *owner"?
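For concreteness, something like the following (the exact layout is only a
guess; note that the code below still calls _raw_spin_trylock(), so in this
sketch the owner pointer sits next to the raw lock word rather than literally
replacing it):

typedef struct {
	raw_spinlock_t raw_lock;	/* underlying lock word, as today */
	struct task_struct *owner;	/* current holder, or NULL when unlocked */
} spinlock_t;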
void _spin_lock(spinlock_t *lock)
{
	struct task_struct *owner;

	for (;;) {
		preempt_disable();
		if (likely(_raw_spin_trylock(lock)))
			break;
		preempt_enable();

		while (!spin_can_lock(lock)) {
			rcu_read_lock();
			owner = lock->owner;
			if (owner && current->prio < owner->prio &&
			    !test_tsk_thread_flag(owner, TIF_NEED_RESCHED))
				set_tsk_thread_flag(owner, TIF_NEED_RESCHED);
			rcu_read_unlock();
			cpu_relax();
		}
	}

	lock->owner = current;
}
void _spin_unlock(spinlock_t *lock)
{
	lock->owner = NULL;
	_raw_spin_unlock(lock);
	preempt_enable();
}
Now we don't need need_lockbreak(lock): need_resched() alone is enough, and we
take ->prio into consideration.
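To illustrate the point (this is only a sketch, not part of the patch): a long
lock-holding loop could then break the lock on a plain need_resched() test,
with no separate need_lockbreak() check:

static inline int cond_resched_lock_sketch(spinlock_t *lock)
{
	/* a contending waiter has already set TIF_NEED_RESCHED on us if it matters */
	if (need_resched()) {
		_spin_unlock(lock);
		cpu_relax();
		_spin_lock(lock);
		return 1;
	}
	return 0;
}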
Makes sense? Or stupid?
Oleg.