On Tue, 17 Jul 2007, Ingo Molnar wrote:

> Spectacularly no! With this patch the "glitch1" script with multiple
> scrolling windows has all xterms and glxgears stop totally dead for
> ~200ms once per second. I didn't properly test anything else after
> that. Since the automount issue doesn't seem to start until something
> kicks it off, I didn't see it, but that doesn't mean it's fixed.
>
> * Ian Kent <raven@xxxxxxxxxx> wrote:
>
> > In several places I have code similar to:
> >
> >	wait.tv_sec = time(NULL) + 1;
> >	wait.tv_nsec = 0;
>
> Ok, that definitely should work.
>
> Does the patch below help?
>
> > Hope that info helps.
>
> ah! It passes in a low-res time source into a high-res time interface
> (pthread_cond_timedwait()). Could you change the time(NULL) + 1 to
> time(NULL) + 2, or change it to:
>
>	gettimeofday(&wait, NULL);
>	wait.tv_sec++;
This is wrong. It's wrong for two reasons:
- it really shouldn't be needed. I don't think "time()" has to be *exactly* in sync, but I don't think it can be off by a third of a second or whatever (which is what the "30% CPU load" would seem to imply)
- gettimeofday() works on a timeval (which counts microseconds in tv_usec), while pthread_cond_timedwait() works on a timespec (which counts nanoseconds in tv_nsec).
So if it actually makes a difference, it makes a difference for the *wrong* reason: the time is still totally nonsensical in the tv_nsec field (because it actually got filled in with usecs!), but now the tv_sec field is in sync, so it hides the bug.
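For reference, a correct conversion has to scale tv_usec by 1000 before storing it in tv_nsec. A minimal sketch (the helper name here is illustrative, not from the thread or any library):

```c
#include <sys/time.h>
#include <time.h>

/* Build an absolute deadline "now + sec" for pthread_cond_timedwait().
 * gettimeofday() fills tv_usec (microseconds); a timespec wants
 * tv_nsec (nanoseconds), so the field must be scaled, not copied. */
static void abs_deadline(struct timespec *ts, int sec)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	ts->tv_sec  = tv.tv_sec + sec;
	ts->tv_nsec = tv.tv_usec * 1000;	/* usec -> nsec */
}
```

Since tv_usec is always below 1000000, the scaled value stays below NSEC_PER_SEC and no extra normalization is needed here.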
Anyway, hopefully the patch below helps. But we probably should make this whole thing a much more generic routine (ie we have our internal "getnstimeofday()" that is still missing the second-overflow logic, and that is quite possibly the one that triggers the "30% off" behaviour).
Ingo, I'd suggest:
- get rid of "timespec_add_ns()", or at least make it return a value indicating when it overflows.
- make all the people who overflow into tv_sec call a "fix_up_seconds()" thing that does the xtime overflow handling.
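A sketch of what that could look like (the function names here are illustrative, not the actual kernel API):

```c
#include <time.h>

#define NSEC_PER_SEC 1000000000L

/* Illustrative variant of timespec_add_ns() that reports how many
 * whole seconds the nanosecond field overflowed by, instead of
 * silently wrapping.  A caller such as the xtime code could then do
 * its own "fix_up_seconds()"-style handling with the return value. */
static long timespec_add_ns_report(struct timespec *ts, long long ns)
{
	long overflow = 0;

	ns += ts->tv_nsec;
	while (ns >= NSEC_PER_SEC) {
		ns -= NSEC_PER_SEC;
		overflow++;
	}
	ts->tv_nsec = ns;
	ts->tv_sec += overflow;
	return overflow;
}
```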
		Linus