On Wed, 2015-04-29 at 14:14 -0400, Waiman Long wrote:
On 04/28/2015 04:00 PM, Jason Low wrote:
The p->mm->numa_scan_seq is accessed using READ_ONCE/WRITE_ONCE
and modified without exclusive access. It is not clear why it is
accessed this way. This patch provides some documentation on that.
Signed-off-by: Jason Low <jason.low2@xxxxxx>
---
kernel/sched/fair.c | 12 ++++++++++++
1 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5a44371..794f7d7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1794,6 +1794,11 @@ static void task_numa_placement(struct task_struct *p)
 	u64 runtime, period;
 	spinlock_t *group_lock = NULL;
 
+	/*
+	 * The p->mm->numa_scan_seq gets updated without
+	 * exclusive access. Use READ_ONCE() here to ensure
+	 * that the field is read in a single access.
+	 */
 	seq = READ_ONCE(p->mm->numa_scan_seq);
 	if (p->numa_scan_seq == seq)
 		return;
@@ -2107,6 +2112,13 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 
 static void reset_ptenuma_scan(struct task_struct *p)
 {
+	/*
+	 * We only did a read acquisition of the mmap sem, so
+	 * p->mm->numa_scan_seq is written to without exclusive access.
+	 * That's not much of an issue though, since this is just used
+	 * for statistical sampling. Use WRITE_ONCE and READ_ONCE, which
+	 * are not expensive, to avoid load/store tearing.
+	 */
 	WRITE_ONCE(p->mm->numa_scan_seq, READ_ONCE(p->mm->numa_scan_seq) + 1);
 	p->mm->numa_scan_offset = 0;
 }
READ_ONCE followed by a WRITE_ONCE won't stop load/store tearing from
happening unless you use an atomic instruction to do the increment. So I
think your comment may be a bit misleading.
Right, the READ and WRITE operations will still be done separately and
won't be atomic. Here, we're saying that this prevents load/store
tearing on each of those individual write/read operations. Please let me
know if you prefer this to be worded differently.
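
For illustration, a minimal userspace-style sketch of the distinction being
discussed (READ_ONCE()/WRITE_ONCE() modeled here with volatile casts, and a
C11 atomic standing in for a kernel atomic op; the names below are stand-ins,
not the kernel's implementation):

/*
 * Illustrative sketch only: READ_ONCE()/WRITE_ONCE() are modeled with
 * volatile casts and a C11 atomic stands in for a kernel atomic op.
 */
#include <stdatomic.h>
#include <stdio.h>

#define READ_ONCE(x)		(*(volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile typeof(x) *)&(x) = (v))

static int numa_scan_seq;	/* plain int, like mm->numa_scan_seq */
static atomic_int atomic_seq;	/* hypothetical atomic counterpart */

static void nonatomic_increment(void)
{
	/*
	 * Each load and each store is a single, untorn access, but the
	 * read-modify-write as a whole is not atomic: two CPUs doing
	 * this concurrently can read the same value and one update is
	 * lost.  That is tolerable for a statistics-only counter.
	 */
	WRITE_ONCE(numa_scan_seq, READ_ONCE(numa_scan_seq) + 1);
}

static void atomic_increment(void)
{
	/*
	 * A true atomic read-modify-write: no update can be lost, at
	 * the cost of a more expensive instruction.
	 */
	atomic_fetch_add(&atomic_seq, 1);
}

int main(void)
{
	nonatomic_increment();
	atomic_increment();
	printf("plain: %d, atomic: %d\n",
	       READ_ONCE(numa_scan_seq), atomic_load(&atomic_seq));
	return 0;
}

The comment in the patch is only claiming the first property (no load/store
tearing on each individual access), not the second (a lossless increment),
which would require the atomic form.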