Re: [PATCH] sched,x86: optimize switch_mm for multi-threaded workloads

From: Linus Torvalds
Date: Wed Jul 31 2013 - 20:41:45 EST

On Wed, Jul 31, 2013 at 4:14 PM, Paul Turner <pjt@xxxxxxxxxx> wrote:
> We attached the following explanatory comment to our version of the patch:
> /*
> * In the common case (two user threads sharing mm
> * switching) the bit will be set; avoid doing a write
> * (via atomic test & set) unless we have to. This is
> * safe, because no other CPU ever writes to our bit
> * in the mask, and interrupts are off (so we can't
> * take a TLB IPI here.) If we don't do this, then
> * switching threads will pingpong the cpumask
> * cacheline.
> */

So as mentioned, the "interrupts will be off" is actually dubious.
It's true for the context switch case, but not for the activate_mm().

However, as Rik points out, activate_mm() is different in that we
shouldn't have any preexisting MMU state anyway. And besides, that
should never trigger the "prev == next" case.

But it does look a bit messy, and even your comment is a bit
misleading (it might make somebody think that all of switch_mm() is
protected from interrupts).

Anyway, I'm perfectly ok with the patch itself, but I just wanted to
make sure people had thought about these things.

To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx