[PATCHv5 2/3] mm: reduce atomic use on use_mm fast path

From: Michael S. Tsirkin
Date: Thu Aug 27 2009 - 12:10:57 EST


When the mm being switched to matches the task's active mm, there is no
need to increment and then drop the mm count: the task already holds a
reference to that mm through active_mm. Making the reference count
update conditional reduces contention on the mm_count cache line on SMP
systems.
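
For context, below is a minimal, hypothetical sketch of the kind of
kernel-thread caller this optimizes (the worker function and its mm
handoff are invented for illustration; use_mm()/unuse_mm() are the real
interfaces from <linux/mmu_context.h>):

	#include <linux/kthread.h>
	#include <linux/mmu_context.h>

	/*
	 * Hypothetical worker thread, sketched to show where the fast
	 * path fires: unuse_mm() clears tsk->mm but leaves
	 * tsk->active_mm intact, so a back-to-back use_mm() on the same
	 * mm (with no reschedule in between that lends the kthread a
	 * different active_mm) finds active_mm == mm and skips the
	 * atomic_inc()/mmdrop() pair.
	 */
	static int example_worker(void *data)
	{
		struct mm_struct *mm = data;	/* mm of the process being served */

		while (!kthread_should_stop()) {
			use_mm(mm);	/* fast path when active_mm == mm */
			/* ... access user memory, e.g. copy_from_user() ... */
			unuse_mm(mm);	/* tsk->mm = NULL; tsk->active_mm still mm */
		}
		return 0;
	}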

Acked-by: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
---
mm/mmu_context.c | 9 ++++++---
1 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/mmu_context.c b/mm/mmu_context.c
index 9989c2f..0777654 100644
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -27,13 +27,16 @@ void use_mm(struct mm_struct *mm)

 	task_lock(tsk);
 	active_mm = tsk->active_mm;
-	atomic_inc(&mm->mm_count);
+	if (active_mm != mm) {
+		atomic_inc(&mm->mm_count);
+		tsk->active_mm = mm;
+	}
 	tsk->mm = mm;
-	tsk->active_mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
 
-	mmdrop(active_mm);
+	if (active_mm != mm)
+		mmdrop(active_mm);
 }
 EXPORT_SYMBOL_GPL(use_mm);

--
1.6.2.5
