On Wednesday 15 November 2006 00:36, Suleiman Souhlal wrote:
> This is done by a per-cpu vxtime structure that stores the last TSC and HPET
> values.
> Whenever we switch to a userland process after an HLT instruction has been
> executed or after the CPU frequency has changed, we force a new read of the
> TSC, HPET and xtime so that we know the correct frequency we have to deal
> with.
> We also force a resync once every second, on every CPU.
Hmm, I'm not sure we want to do it this way, especially since you
have unsolved races. But here's a patch review ignoring the 10,000-foot picture again.
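
For reference, the per-CPU state the description implies would look roughly
like this; this is only a sketch, and the field names apart from vx_seq are
my guesses, not taken from the patch:

	/* needs <linux/seqlock.h> and <linux/time.h> */
	struct vxtime_percpu {
		seqlock_t	vx_seq;		/* bumped on context switch */
		unsigned long	last_tsc;	/* TSC value at last resync */
		long		last_hpet;	/* HPET counter at last resync */
		long		tsc_quot;	/* scaled ns-per-TSC-tick quotient */
		struct timespec	xtime_snap;	/* xtime snapshot at last resync */
	};

Each resync (HLT wakeup, cpufreq transition, or the once-a-second refresh)
would then retake all three samples under write_seqlock(&vx->vx_seq) so
readers always see a consistent set.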
> +
> +	/*
> +	 * If we are switching away from a process in vsyscall, touch
> +	 * the vxtime seq lock so that userland is aware that a context switch
> +	 * has happened.
> +	 */
> +	rip = *(unsigned long *)(prev->rsp0 +
> +	    offsetof(struct user_regs_struct, rip) - sizeof(struct pt_regs));
> +	if (unlikely(rip > VSYSCALL_START) && unlikely(rip < VSYSCALL_END)) {
> +		write_seqlock(&vxtime.vx_seq);
> +		write_sequnlock(&vxtime.vx_seq);
> +	}
> +
Can't this starve? If a process is unlucky enough (e.g. from lots of interrupts) that it can't get through the vsyscall without at least one context switch, it will never finish. OK, maybe it's an unlikely enough livelock, but it still makes me uncomfortable.
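
To spell out the concern: the userland reader side is the standard seqlock
retry loop, roughly like this (a sketch, not the patch's actual vsyscall code):

	unsigned seq;
	do {
		seq = read_seqbegin(&vxtime.vx_seq);
		/* sample last_tsc/last_hpet and extrapolate the time */
	} while (read_seqretry(&vxtime.vx_seq, seq));

Since the switch-out path above does write_seqlock()/write_sequnlock()
unconditionally, any context switch during the loop body makes
read_seqretry() fail, so a task that never gets through the loop within one
timeslice retries forever.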
> -	.vxtime : AT(VLOAD(.vxtime)) { *(.vxtime) }
> -	vxtime = VVIRT(.vxtime);
> -
> 	.vgetcpu_mode : AT(VLOAD(.vgetcpu_mode)) { *(.vgetcpu_mode) }
> 	vgetcpu_mode = VVIRT(.vgetcpu_mode);
> @@ -119,6 +116,9 @@ #define VVIRT(x) (ADDR(x) - VVIRT_OFFSET
> 	.vsyscall_2 ADDR(.vsyscall_0) + 2048: AT(VLOAD(.vsyscall_2)) { *(.vsyscall_2) }
> 	.vsyscall_3 ADDR(.vsyscall_0) + 3072: AT(VLOAD(.vsyscall_3)) { *(.vsyscall_3) }
> +	.vxtime : AT(VLOAD(.vxtime)) { *(.vxtime) }
> +	vxtime = VVIRT(.vxtime);
Why did you move it?
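
For context, the variable lands in that section through an attribute, along
these lines (quoting from memory of the 2.6-era headers, so treat the details
as approximate):

	#define __section_vxtime \
		__attribute__((unused, __section__(".vxtime"), aligned(16)))
	struct vxtime_data __vxtime __section_vxtime;

VVIRT() then translates the section's load address into the fixed virtual
address in the vsyscall page that the vsyscall code reads it from, so where
the linker script places .vxtime is what decides that address.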
> +long vgetcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *tcache);
Externs in C code are still forbidden, even if they don't have the
extern keyword.
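
I.e. the declaration belongs in a header included by both the definition and
the callers, something like (the file name here is illustrative):

	/* in e.g. include/asm-x86_64/vsyscall.h */
	long vgetcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *tcache);

and then #include <asm/vsyscall.h> in the .c files, so the compiler can check
the definition against the prototype.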