On Tue, Jun 18, 2013 at 11:17:41AM -0400, KOSAKI Motohiro wrote:
+#ifdef CONFIG_64BIT
+	/*
+	 * 64-bit doesn't need locks to atomically read a 64bit value. So we
+	 * have two optimization chances: 1) when the caller doesn't need
+	 * delta_exec and 2) when the task's delta_exec is 0. The former is
+	 * obvious. The latter is subtler: reading ->on_cpu is racy, but
+	 * this is ok. If we race with it leaving cpu, we'll take the lock,
+	 * so we're correct. If we race with it entering cpu, unaccounted
+	 * time is 0. This is indistinguishable from the read occurring a
+	 * few cycles earlier.
+	 */
+	if (!add_delta || !p->on_cpu)
+		return p->se.sum_exec_runtime;
+#endif
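(For context, the hunk above is only the fast path. A rough sketch of the full helper it would sit in, with the locked slow path assumed from the scheduler code of that era rather than quoted from the patch; the wrapper name task_runtime_sketch() is made up here, while task_rq_lock()/task_rq_unlock() and do_task_delta_exec() are the existing helpers of that time:

u64 task_runtime_sketch(struct task_struct *p, bool add_delta)
{
	unsigned long flags;
	struct rq *rq;
	u64 ns;

#ifdef CONFIG_64BIT
	/* Racy but safe fast path, per the comment in the hunk above. */
	if (!add_delta || !p->on_cpu)
		return p->se.sum_exec_runtime;
#endif

	/* Slow path: take the runqueue lock and fold in the pending delta. */
	rq = task_rq_lock(p, &flags);
	ns = p->se.sum_exec_runtime;
	if (add_delta)
		ns += do_task_delta_exec(p, rq);
	task_rq_unlock(rq, p, &flags);

	return ns;
}
)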
I'm not sure this is correct from an SMP ordering POV. p->on_cpu may appear
to be 0 while the task has actually been running for a while, and the
p->se.sum_exec_runtime we read can then lag behind the value actually
accumulated on the remote CPU.
Quote from Paul's last e-mail:
Stronger:
+#ifdef CONFIG_64BIT
+	if (!p->on_cpu)
+		return p->se.sum_exec_runtime;
+#endif
[ Or !p->on_cpu || !add_delta ].
We can take the racy read versus p->on_cpu since:
If we race with it leaving cpu: we take lock, we're correct
If we race with it entering cpu: unaccounted time ---> 0, this is
indistinguishable from the read occurring a few cycles earlier.
Yeah, my worry was more about both p->on_cpu and p->se.sum_exec_runtime being
stale for too long. How much time can pass, in the worst case, before CPU X sees
the updates done by CPU Y under rq(Y)->lock, given that CPU X doesn't take rq(Y)->lock
to read those updates? I guess it depends on the hardware, and on the locking and
ordering that happened before.
Bah, it probably doesn't matter in practice.
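(For anyone who wants to poke at the pattern outside the kernel, here is a minimal
user-space analogue. It is purely illustrative: the names mirror the scheduler fields
but none of this is kernel code, and the loop bounds are arbitrary. It shows the shape
of the argument: on a 64-bit machine the lockless load is never torn, so the only
exposure is the staleness discussed above:

#include <inttypes.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic uint64_t sum_exec_runtime;	/* stand-in for p->se.sum_exec_runtime */
static _Atomic int on_cpu;			/* stand-in for p->on_cpu */
static uint64_t delta_exec;			/* unaccounted time, only touched under rq_lock */

static void *task_runs(void *arg)
{
	for (int i = 0; i < 100000; i++) {
		atomic_store(&on_cpu, 1);		/* "scheduled in" */
		pthread_mutex_lock(&rq_lock);
		delta_exec += 1;			/* time accrues while on cpu */
		pthread_mutex_unlock(&rq_lock);

		pthread_mutex_lock(&rq_lock);		/* "scheduled out": fold delta */
		atomic_fetch_add(&sum_exec_runtime, delta_exec);
		delta_exec = 0;
		atomic_store(&on_cpu, 0);
		pthread_mutex_unlock(&rq_lock);
	}
	return NULL;
}

static uint64_t read_runtime(void)
{
	/* Racy fast path: the 64-bit load is atomic, at worst a bit stale. */
	if (!atomic_load(&on_cpu))
		return atomic_load(&sum_exec_runtime);

	/* Slow path: take the lock so the pending delta is accounted too. */
	pthread_mutex_lock(&rq_lock);
	uint64_t ns = atomic_load(&sum_exec_runtime) + delta_exec;
	pthread_mutex_unlock(&rq_lock);
	return ns;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, task_runs, NULL);
	for (int i = 0; i < 10; i++)
		printf("runtime=%" PRIu64 "\n", read_runtime());
	pthread_join(t, NULL);
	printf("final=%" PRIu64 "\n", atomic_load(&sum_exec_runtime));
	return 0;
}
)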