Re: [patch] sched: unlocked context-switches

From: Ingo Molnar
Date: Sat Apr 09 2005 - 01:57:41 EST



* Nick Piggin <nickpiggin@xxxxxxxxxxxx> wrote:

> Well that does look like a pretty good cleanup. It certainly is the
> final step in freeing complex architecture switching code from
> entanglement with scheduler internal locking, and unifies the locking
> scheme.
>
> I did propose doing unconditionally unlocked switches a while back
> when my patch first popped up - you were against it then, but I guess
> you've had second thoughts?

it was the reordering of switch_to() and the switch_mm()-related logic
that made it really worthwhile and clean. I.e. we pick a task
atomically, we switch stacks, and then we switch the MM. Note that this
setup still leaves open the possibility of moving the stack-switching
back under the irq-disabled section in a natural way.
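
Roughly, the resulting flow looks like this (a sketch only, not the
literal patch; the helper name is illustrative and the lazy-TLB and
kernel-thread cases are ignored):

	/*
	 * Ordering sketch only; not how the calls are actually laid
	 * out in the patch:
	 */
	static void sketch_switch_ordering(struct runqueue *rq,
					   struct task_struct *prev,
					   struct task_struct *next)
	{
		struct mm_struct *oldmm = prev->active_mm;

		spin_lock_irq(&rq->lock);
		/* 1) the next task is picked atomically under rq->lock */
		spin_unlock_irq(&rq->lock);

		/* 2) kernel stacks / register state are switched */
		switch_to(prev, next, prev);

		/* 3) the MM is switched last, outside the runqueue lock */
		switch_mm(oldmm, next->mm, next);
	}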

> It does add an extra couple of stores to on_cpu, and a wmb() for
> architectures that didn't previously need the unlocked switches. And
> ia64 needs the extra interrupt disable / enable. Probably worth it?

it also removes the stores to rq->prev_mm and a few other stores. I
haven't measured any degradation on x86.
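
To spell out the cost you mention: per context-switch it is roughly
these two stores plus the barrier (a sketch of my understanding, not
the exact hunks from the patch; helper names are illustrative):

	/* set before rq->lock is dropped: 'next' now owns this CPU */
	static inline void sketch_prepare_switch(struct task_struct *next)
	{
		next->on_cpu = 1;
	}

	/*
	 * cleared once the switch is done; the barrier orders all prior
	 * stores before the clear, so a remote CPU that waits for
	 * ->on_cpu == 0 sees a consistent task state
	 */
	static inline void sketch_finish_switch(struct task_struct *prev)
	{
		smp_wmb();
		prev->on_cpu = 0;
	}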

If the irq disable/enable becomes widespread, I'll do another patch to
push the irq-enabling into switch_to(), so that the arch can do the
stack-switch first and then enable interrupts and do the rest - but I
didn't want to complicate things unnecessarily for now.
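
What I have in mind for that eventual patch is roughly this (a sketch
only; the arch_* helpers are made-up names):

	#define switch_to(prev, next, last)				\
	do {								\
		/* stack switch with interrupts still disabled */	\
		arch_switch_stacks(prev, next, last);			\
		/* the arch re-enables interrupts itself ... */		\
		local_irq_enable();					\
		/* ... and then finishes the rest of the switch */	\
		arch_finish_switch(prev, next);				\
	} while (0)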

> Minor style request: I like that you're accessing ->on_cpu through
> functions so the !SMP case doesn't clutter the code with ifdefs... but
> can you do set_task_on_cpu(p) and clear_task_on_cpu(p) ?

yeah, I thought about those two variants and went for set_task_on_cpu()
so that it's less encapsulated (it's really just a conditional
assignment; see the sketch below) and so that it parallels set_task_cpu()
usage. But no strong feelings either way. Anyway, let's try what we have
now; I'll do the rest in deltas.
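
For reference, this is roughly what I mean by a conditional assignment
(a sketch, not the exact patch):

	/*
	 * Single helper, parallel to set_task_cpu(); the !SMP case
	 * compiles away so callers need no #ifdefs:
	 */
	static inline void set_task_on_cpu(struct task_struct *p, int on)
	{
	#ifdef CONFIG_SMP
		p->on_cpu = on;
	#endif
	}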

Ingo