[...]
> But compare: (i) kernel 11 minute mode: every 11 minutes the system time
> is written to the RTC. (ii) user space 11 minute mode: every 11 minutes
> the RTC is read out and the pair (system time, RTC) is written to a file.
> Some time later, for example after a reboot, we want to know the time
> but do not yet have a good outside reference.
> Your solution: it is found in the RTC.
> My solution: it is found in the RTC adjusted by the known offset and drift.
>
> You see, your solution is a zeroth-order approximation.
> It is perfect when the drift is zero.
> My solution is a first-order approximation.
> It is perfect when the drift is constant.
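For concreteness, variant (ii) above might look like the following
sketch (assuming Linux's /dev/rtc with the RTC_RD_TIME ioctl; the log
file path is invented, and timegm() is a BSD/GNU extension):

    /* Sketch of user-space "11 minute mode", variant (ii): every
     * 11 minutes, read the RTC and append the (system time, RTC)
     * pair to a file.  Error handling is kept minimal. */
    #define _GNU_SOURCE             /* for timegm() */
    #include <stdio.h>
    #include <time.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/time.h>
    #include <linux/rtc.h>

    int main(void)
    {
        int fd = open("/dev/rtc", O_RDONLY);
        FILE *log = fopen("/var/lib/rtc-drift.log", "a"); /* invented path */

        if (fd < 0 || !log) {
            perror("setup");
            return 1;
        }
        for (;;) {
            struct rtc_time rt;
            struct timeval sys;
            struct tm tm = { 0 };

            if (ioctl(fd, RTC_RD_TIME, &rt) < 0) {
                perror("RTC_RD_TIME");
                return 1;
            }
            gettimeofday(&sys, NULL);

            /* Convert the RTC reading (assumed to hold UTC) to a time_t. */
            tm.tm_sec  = rt.tm_sec;  tm.tm_min  = rt.tm_min;
            tm.tm_hour = rt.tm_hour; tm.tm_mday = rt.tm_mday;
            tm.tm_mon  = rt.tm_mon;  tm.tm_year = rt.tm_year;

            fprintf(log, "%ld %ld\n", (long)sys.tv_sec, (long)timegm(&tm));
            fflush(log);
            sleep(11 * 60);
        }
    }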
Neither of the above is correct: you can compensate for the drift in the
kernel if you want to, and the drift is not constant. Your solution
mainly deals with the situation when the computer is turned off (you
try to estimate what happened while the computer was off); I try to
deal with the situation when the computer is turned on and
synchronized. You forgot one thing: normally the RTC update is OFF,
and it will periodically be disabled again unless YOU turn it on.
I don't want to discuss the best method of synchronizing your clock
here.
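For what it's worth, the kernel already has a frequency-correction knob
for the system clock, which is how in-kernel drift compensation would
be driven. A minimal sketch using adjtimex(2) (the -42.5 ppm value is
an arbitrary example, and root/CAP_SYS_TIME is required):

    /* Sketch: apply a constant frequency correction to the system
     * clock via adjtimex(2); the kernel then slews out the drift
     * continuously.  The correction value is invented. */
    #include <stdio.h>
    #include <sys/timex.h>

    int main(void)
    {
        struct timex tx = { 0 };

        tx.modes = ADJ_FREQUENCY;
        tx.freq  = (long)(-42.5 * 65536);   /* units of 2^-16 ppm */

        if (adjtimex(&tx) < 0) {
            perror("adjtimex");
            return 1;
        }
        return 0;
    }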
>
> There are no circumstances where kernel 11 minute mode is better.
It's transparent to the (portable) user process.
>
> > "calculate the actual time" is an interesting concept. Maybe you can
> > explain how it works, especially if a system is up several months. My
> > RTC drifts about a minute per week during summer...
>
> I think you can solve linear equations yourself.
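Those equations are simple enough; a minimal sketch, assuming two
(system time, RTC) pairs as logged by variant (ii) above. (The quoted
minute-per-week drift works out to about 60/604800, i.e. roughly
99 ppm.)

    /* Sketch: first-order RTC model  rtc = sys*(1 + drift) + offset,
     * fitted from two logged (system time, RTC) samples and then
     * inverted to recover the time from a later RTC reading (e.g.
     * after a reboot).  Sample values are invented for illustration. */
    #include <stdio.h>

    int main(void)
    {
        double sys1 = 0.0,      rtc1 = 2.0;        /* first sample   */
        double sys2 = 604800.0, rtc2 = 604862.0;   /* one week later */

        /* Solve the two equations rtc_i = sys_i*(1+drift) + offset. */
        double drift  = (rtc2 - rtc1) / (sys2 - sys1) - 1.0;
        double offset = rtc1 - sys1 * (1.0 + drift);

        printf("drift  = %.1f ppm\n", drift * 1e6);   /* ~99.2 ppm */
        printf("offset = %.3f s\n", offset);          /* 2.000 s   */

        /* After a reboot, estimate the true time from the RTC alone. */
        double rtc_now = 700000.0;
        double t_est   = (rtc_now - offset) / (1.0 + drift);
        printf("estimated time = %.3f s\n", t_est);
        return 0;
    }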
You are trying to calibrate one bad thing (the system timer) with another
bad thing (the RTC), while claiming it does not make sense to set one of
the bad things (the RTC) to the correct time (completely independently
of how that correct time is determined).
Basically you are saying your algorithm is the only true solution. I
also once synchronized all the computers in the same room to each
other, and they were perfectly stable, because they were all drifting
in the same direction. The (in)stability was 0.000 ppm, but whenever I
synchronize to GPS/PPS the (in)stability is at least 0.010 ppm, and the
slightest load or temperature change will increase it to about
0.100 ppm.
>
> It looks like you are arguing that a first-order approximation is not perfect
> and using that to defend a zeroth-order approximation...
I'm just claiming that the clocks should be set if an application
wants the clocks to be set. No more, no less. The rest is something
you attribute to me to support your point of view, but I never said it.
Maybe someone should implement a POSIX.4-compatible CLOCK_CMOS_RTC with
one-second resolution, and everybody will be fine. Strangely enough,
this would have to be done in the kernel.
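No such clock exists, but the user-space side would be exactly the
standard POSIX.4 (POSIX.1b) interface; a sketch with a made-up clock
id (today the call simply fails with EINVAL):

    /* Sketch: how a hypothetical CLOCK_CMOS_RTC would be read via
     * clock_gettime().  Only the interface is real; the clock id
     * is invented for illustration. */
    #include <stdio.h>
    #include <time.h>

    #ifndef CLOCK_CMOS_RTC
    #define CLOCK_CMOS_RTC 42   /* made-up clock id */
    #endif

    int main(void)
    {
        struct timespec ts;

        if (clock_gettime(CLOCK_CMOS_RTC, &ts) == 0)
            printf("RTC: %ld (one-second resolution)\n", (long)ts.tv_sec);
        else
            perror("clock_gettime(CLOCK_CMOS_RTC)");
        return 0;
    }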
Regards,
Ulrich