Re: [RFC][PATCH 17/17] sched: Sort hotplug vs ttwu queueing

From: Yong Zhang
Date: Wed Dec 29 2010 - 09:52:12 EST


On Fri, Dec 24, 2010 at 01:23:55PM +0100, Peter Zijlstra wrote:
>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
> ---
> kernel/sched.c | 20 ++++++++++++++++++--
> 1 file changed, 18 insertions(+), 2 deletions(-)
>
> Index: linux-2.6/kernel/sched.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched.c
> +++ linux-2.6/kernel/sched.c
> @@ -2470,15 +2470,15 @@ static int ttwu_remote(struct task_struc
> return ret;
> }
>
> -void sched_ttwu_pending(void)
> +static void __sched_ttwu_pending(struct rq *rq)
> {
> #ifdef CONFIG_SMP
> - struct rq *rq = this_rq();
> struct task_struct *list = xchg(&rq->wake_list, NULL);
>
> if (!list)
> return;
>
> + rq = this_rq(); /* always enqueue locally */

But it's possible that a task p on that wake_list is not allowed to
run on this cpu (its ->cpus_allowed may not include it), right?
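
Just to illustrate what I mean (a rough sketch only, not a proposal;
the helper name is made up, and I'm assuming the drain loop could fall
back the way the normal migration path does, via select_fallback_rq()):

	static int ttwu_fixup_cpu(struct rq *rq, struct task_struct *p)
	{
		int cpu = cpu_of(rq);

		/*
		 * A task drained from a dead cpu's wake_list may not
		 * have this cpu set in ->cpus_allowed.
		 */
		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
			cpu = select_fallback_rq(task_cpu(p), p);

		return cpu;
	}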

Thanks,
Yong

> raw_spin_lock(&rq->lock);
>
> while (list) {
> @@ -2491,6 +2491,11 @@ void sched_ttwu_pending(void)
> #endif
> }
>
> +void sched_ttwu_pending(void)
> +{
> + __sched_ttwu_pending(this_rq());
> +}
> +
> #ifdef CONFIG_SMP
> static void ttwu_queue_remote(struct task_struct *p, int cpu)
> {
> @@ -6162,6 +6167,17 @@ migration_call(struct notifier_block *nf
> migrate_nr_uninterruptible(rq);
> calc_global_load_remove(rq);
> break;
> +
> + case CPU_DEAD:
> + /*
> + * Queue any possible remaining pending wakeups on this cpu.
> + * Load-balancing will sort it out eventually.
> + */
> + local_irq_save(flags);
> + __sched_ttwu_pending(cpu_rq(cpu));
> + local_irq_restore(flags);
> + break;
> +
> #endif
> }
> return NOTIFY_OK;