On Thu, Jun 20, 2002 at 04:40:33PM +0200, Andrea Arcangeli wrote:
> however I noticed an SMP bug in my changes: I was too aggressive in
> removing the loop in task_rq_lock. Not that such a bug has ever
> triggered yet, but the rq may change under us while we take the lock
> if the task is being migrated to another CPU.
just for reference, here is the fix:
--- sched/kernel/sched.c.~1~	Thu Jun 20 16:42:41 2002
+++ sched/kernel/sched.c	Thu Jun 20 16:43:36 2002
@@ -133,19 +133,13 @@ static inline runqueue_t *task_rq_lock(t
 {
 	struct runqueue *rq;
 
-	/*
-	 * 2.4 cannot be made preemptive or it can trigger preemption bugs all
-	 * over the place (just check the networking per-cpu data), so it's
-	 * pointless to disable irq before reading the current runqueue address.
-	 */
+repeat_lock_task:
 	rq = task_rq(p);
 	spin_lock_irqsave(&rq->lock, *flags);
-	if (unlikely(rq != task_rq(p)))
-		/*
-		 * Bug just in case somebody made the 2.4 kernel non preemptive
-		 * as an experiment on a non production system.
-		 */
-		BUG();
+	if (unlikely(rq != task_rq(p))) {
+		spin_unlock_irqrestore(&rq->lock, *flags);
+		goto repeat_lock_task;
+	}
 	return rq;
 }
 
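[Editor's note: to show the lock-then-recheck idiom outside the kernel tree,
here is a minimal standalone sketch using pthreads and C11 atomics. All names
(runqueue_s, task_s, task_rq_lock_sketch) are illustrative, not kernel code;
the real task_rq_lock() additionally disables interrupts via
spin_lock_irqsave.]

/* Sketch of the retry loop restored by the patch above: read the
 * task's current runqueue pointer, take its lock, then recheck the
 * pointer under the lock and retry if a migration raced with us. */
#include <pthread.h>
#include <stdatomic.h>

struct runqueue_s {
	pthread_mutex_t lock;
	/* ... per-cpu runqueue state ... */
};

struct task_s {
	/* changed by the (hypothetical) migration path under rq->lock */
	_Atomic(struct runqueue_s *) rq;
};

static struct runqueue_s *task_rq_lock_sketch(struct task_s *t)
{
	struct runqueue_s *rq;

	for (;;) {
		rq = atomic_load(&t->rq);
		pthread_mutex_lock(&rq->lock);
		if (rq == atomic_load(&t->rq))
			return rq;	/* task did not migrate: rq is pinned */
		/* lost the race with a migration: unlock and retry */
		pthread_mutex_unlock(&rq->lock);
	}
}

[The recheck is what makes this safe: assuming migrations update t->rq only
while holding the runqueue lock, once we hold the lock and the pointer still
matches, the task can no longer be moved away from rq under us.]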
Andrea