[PATCH] x86/mm: Improve switch_mm barrier comments
From: Andy Lutomirski
Date: Tue Jan 12 2016 - 15:12:06 EST
My previous comments were still a bit confusing and there was a
typo. Fix it up.
Reported-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Fixes: 71b3c126e611 ("x86/mm: Add barriers and document switch_mm()-vs-flush synchronization")
Signed-off-by: Andy Lutomirski <luto@xxxxxxxxxx>
---
 arch/x86/include/asm/mmu_context.h | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 1edc9cd198b8..e08d27369fce 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -132,14 +132,14 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
  * be sent, and CPU 0's TLB will contain a stale entry.)
  * The bad outcome can occur if either CPU's load is
- * reordered before that CPU's store, so both CPUs much
+ * reordered before that CPU's store, so both CPUs must
  * execute full barriers to prevent this from happening.
  * Thus, switch_mm needs a full barrier between the
  * store to mm_cpumask and any operation that could load
  * from next->pgd. This barrier synchronizes with
- * remote TLB flushers. Fortunately, load_cr3 is
- * serializing and thus acts as a full barrier.
+ * remote TLB flushers. Fortunately, cpumask_set_cpu is
+ * (on x86) a full barrier, and load_cr3 is serializing.
@@ -188,9 +188,10 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
  * tlb flush IPI delivery. We must reload CR3
  * to make sure to use no freed page tables.
- * As above, this is a barrier that forces
- * TLB repopulation to be ordered after the
- * store to mm_cpumask.
+ * As above, either of cpumask_set_cpu and
+ * load_cr3 is a sufficient barrier to force TLB
+ * repopulation to be ordered after the store to
+ * mm_cpumask.
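
For reference, the ordering both hunks describe is the classic
store-buffering pattern: each CPU does a store followed by a load, and
the bad outcome (both loads observing the old value) is ruled out only
if both CPUs execute a full barrier between their store and their load.
Below is a minimal, self-contained sketch of that pattern using C11
atomics rather than the kernel's primitives. The names cpumask_bit and
pgd_change are illustrative stand-ins, not kernel identifiers; the
seq_cst fences stand in for cpumask_set_cpu()'s LOCK-prefixed RMW and
the serializing load_cr3() on one side, and the flusher's barrier on
the other.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int cpumask_bit;	/* stand-in: this CPU's bit in mm_cpumask */
static atomic_int pgd_change;	/* stand-in: a page-table change being flushed */

static int switcher_saw, flusher_saw;

/* switch_mm() side: publish cpumask membership, then load from the page tables. */
static void *switcher(void *arg)
{
	(void)arg;
	atomic_store_explicit(&cpumask_bit, 1, memory_order_relaxed);
	/* full barrier between the store and the load, as the comment requires */
	atomic_thread_fence(memory_order_seq_cst);
	switcher_saw = atomic_load_explicit(&pgd_change, memory_order_relaxed);
	return NULL;
}

/* remote-flush side: publish the change, then check the cpumask to pick IPI targets. */
static void *flusher(void *arg)
{
	(void)arg;
	atomic_store_explicit(&pgd_change, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);
	flusher_saw = atomic_load_explicit(&cpumask_bit, memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, switcher, NULL);
	pthread_create(&b, NULL, flusher, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/*
	 * With both fences, switcher_saw == 0 && flusher_saw == 0 is
	 * impossible: either the switcher sees the published change and
	 * repopulates from the current page tables, or the flusher sees
	 * the cpumask bit and sends the flush IPI.
	 */
	printf("switcher_saw=%d flusher_saw=%d\n", switcher_saw, flusher_saw);
	return 0;
}

Drop either fence and the forbidden outcome becomes possible even on
x86, since TSO permits exactly this store-load reordering. That is why
the comment insists that both CPUs, not just one, execute a full
barrier between their store and their load.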