Re: [PATCH] sched,x86: optimize switch_mm for multi-threaded workloads

From: Rik van Riel
Date: Wed Jul 31 2013 - 21:58:29 EST


On 07/31/2013 08:41 PM, Linus Torvalds wrote:
> On Wed, Jul 31, 2013 at 4:14 PM, Paul Turner <pjt@xxxxxxxxxx> wrote:
>> We attached the following explanatory comment to our version of the patch:
>>
>> /*
>>  * In the common case (two user threads sharing mm
>>  * switching) the bit will be set; avoid doing a write
>>  * (via atomic test & set) unless we have to. This is
>>  * safe, because no other CPU ever writes to our bit
>>  * in the mask, and interrupts are off (so we can't
>>  * take a TLB IPI here.) If we don't do this, then
>>  * switching threads will pingpong the cpumask
>>  * cacheline.
>>  */
>
> So as mentioned, the "interrupts will be off" is actually dubious.
> It's true for the context switch case, but not for the activate_mm().
>
> However, as Rik points out, activate_mm() is different in that we
> shouldn't have any preexisting MMU state anyway. And besides, that
> should never trigger the "prev == next" case.
>
> But it does look a bit messy, and even your comment is a bit
> misleading (it might make somebody think that all of switch_mm() is
> protected from interrupts).
>
> Anyway, I'm perfectly ok with the patch itself, but I just wanted to
> make sure people had thought about these things.

Would you like me to document the things we found in the comment,
and resend a patch, or is the patch good as-is?
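
For illustration, here is a minimal user-space sketch of the
check-before-set pattern the comment above describes. This is not the
actual switch_mm() code; a single atomic word stands in for
mm_cpumask(), and the function names are made up for the example:

#include <stdatomic.h>
#include <stdio.h>

/* Toy stand-in for mm_cpumask(): one atomic word of CPU bits. */
static _Atomic unsigned long long cpu_bits;

/*
 * Check-before-set: in the common case (two threads of the same mm
 * switching on this CPU) the bit is already set, so we only do a
 * plain read and never dirty the cacheline with an atomic RMW.
 */
static void mark_cpu(int cpu)
{
        unsigned long long bit = 1ULL << cpu;

        if (!(atomic_load_explicit(&cpu_bits, memory_order_relaxed) & bit))
                atomic_fetch_or_explicit(&cpu_bits, bit,
                                         memory_order_relaxed);
}

int main(void)
{
        mark_cpu(3);    /* first switch onto this CPU: does the atomic OR */
        mark_cpu(3);    /* later thread switches: read-only fast path */
        printf("cpu_bits = %#llx\n",
               (unsigned long long)atomic_load_explicit(&cpu_bits,
                                                        memory_order_relaxed));
        return 0;
}

In the kernel the same idea amounts to a cpumask_test_cpu() check in
front of cpumask_set_cpu() on mm_cpumask(next) in the prev == next
path of switch_mm().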

--
All rights reversed