Re: [RFC][PATCH 5/5] sched: Reduce ttwu rq->lock contention
From: Yan, Zheng
Date: Thu Dec 16 2010 - 22:06:26 EST
On Fri, Dec 17, 2010 at 4:32 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> @@ -953,7 +955,7 @@ static inline struct rq *__task_rq_lock(
> for (;;) {
> rq = task_rq(p);
> raw_spin_lock(&rq->lock);
> - if (likely(rq == task_rq(p)))
> + if (likely(rq == task_rq(p)) && !task_is_waking(p))
> return rq;
> raw_spin_unlock(&rq->lock);
> }
> @@ -973,7 +975,7 @@ static struct rq *task_rq_lock(struct ta
> local_irq_save(*flags);
> rq = task_rq(p);
> raw_spin_lock(&rq->lock);
> - if (likely(rq == task_rq(p)))
> + if (likely(rq == task_rq(p)) && !task_is_waking(p))
> return rq;
> raw_spin_unlock_irqrestore(&rq->lock, *flags);
> }
It looks like nothing prevents ttwu() from changing the task's CPU while
someone else is holding task_rq_lock(). Is this OK?
Thanks
Yan, Zheng