Re: [RFC][PATCH 5/5] sched: Reduce ttwu rq->lock contention

From: Yong Zhang
Date: Sat Dec 18 2010 - 09:49:26 EST


On Sat, Dec 18, 2010 at 2:15 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Fri, 2010-12-17 at 18:43 +0100, Peter Zijlstra wrote:
>>
>> Hrmph, so is it only about serializing concurrent wakeups? If so, we
>> could possibly hold p->pi_lock over the wakeup.
>
> Something like the below.. except it still suffers from the
> __migrate_task() hole you identified in your other email.
>
> By fully serializing all wakeups using ->pi_lock it becomes a lot
> simpler (although I just realized we might have a problem with
> try_to_wake_up_local).
>
> static int
> try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
> {
>	unsigned long flags;
>	int cpu, ret = 0;
>
>	smp_wmb();
>	raw_spin_lock_irqsave(&p->pi_lock, flags);
>
>	if (!(p->state & state))
>		goto unlock;
>
>	ret = 1; /* we qualify as a proper wakeup now */

Could the following happen in this __window__?

p is going through wait_event() and first sets TASK_UNINTERRUPTIBLE,
so the waker sees that state and the if (!(p->state & state)) check
above passes. But at that point the wait condition is already true for
p, so p returns to running and later goes to sleep again in a
different state:

	p->state = XXX;
	sleep;

Then we could wake up a process that is in the wrong state, no?

>
>	if (p->se.on_rq && ttwu_force(p, state, wake_flags))
>		goto unlock;
>
>	p->sched_contributes_to_load = !!task_contributes_to_load(p);
>
>	/*
>	 * In order to serialize against other tasks wanting to task_rq_lock()
>	 * we need to wait until the current task_rq(p)->lock holder goes away,
>	 * so that the next might observe TASK_WAKING.
>	 */
>	p->state = TASK_WAKING;
>	smp_wmb();
>	raw_spin_unlock_wait(&task_rq(p)->lock);
>
>	/*
>	 * Stable, now that TASK_WAKING is visible.
>	 */
>	cpu = task_cpu(p);
>
> #ifdef CONFIG_SMP
>	/*
>	 * Catch the case where schedule() has done the dequeue but hasn't yet
>	 * scheduled to a new task, in that case p is still being referenced
>	 * by that cpu so we cannot wake it to any other cpu.
>	 *
>	 * Here we must either do a full remote enqueue, or simply wait for
>	 * the remote cpu to finish the schedule(), the latter was found to
>	 * be cheapest.
>	 */
>	while (p->oncpu)
>		cpu_relax();
>
>	if (p->sched_class->task_waking)
>		p->sched_class->task_waking(p);
>
>	cpu = select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
> #endif
>	ttwu_queue(p, cpu);
>	ttwu_stat(p, cpu, wake_flags);
> unlock:
>	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
>
> Â Â Â Âreturn ret;
> }
>



--
Only stand for myself.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/