Re: Apparent backward time travel in timestamps on file creation

From: Linus Torvalds
Date: Thu Mar 30 2017 - 15:52:27 EST


On Thu, Mar 30, 2017 at 12:35 PM, David Howells <dhowells@xxxxxxxxxx> wrote:
> Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
>> The difference can be quite noticeable - basically the
>> "gettimeofday()" time will interpolate within timer ticks, while
>> "xtime" is just the truncated "time at timer tick" value _without_ the
>> correction.
>
> Is there any way to determine the error bar, do you know? Or do I just make
> up a fudge factor?

Hmm. The traditional error bar is just one timer tick, ie 1/HZ. Which
depends on the architecture, but I don't think we've ever had anything
below 100 on x86 (on other architectures it's been lower, like 24Hz on
MIPS).

I guess missed timer ticks could make it worse - we do have code to
try to handle things like that.

But in general, I think you should be able to more or less rely on
that 100Hz (ie 10ms) error bar.
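
Something like the (untested) sketch below shows both numbers from
userspace: CLOCK_REALTIME is the interpolated gettimeofday()-style
time, CLOCK_REALTIME_COARSE is the tick-granularity xtime-style value,
and clock_getres() on the coarse clock reports that 1/HZ error bar:

#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec res, fine, coarse;

	/* The coarse clock's resolution is one timer tick (1/HZ),
	 * eg 0.010000000s on a HZ=100 kernel. */
	clock_getres(CLOCK_REALTIME_COARSE, &res);
	printf("tick granularity: %ld.%09lds\n",
	       (long)res.tv_sec, res.tv_nsec);

	/* Interpolated time vs the last-tick value it builds on. */
	clock_gettime(CLOCK_REALTIME, &fine);
	clock_gettime(CLOCK_REALTIME_COARSE, &coarse);
	printf("fine:   %ld.%09ld\n", (long)fine.tv_sec, fine.tv_nsec);
	printf("coarse: %ld.%09ld\n", (long)coarse.tv_sec, coarse.tv_nsec);
	return 0;
}

The two printed times should stay within that one-tick window of each
other (modulo a missed tick).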

I don't think tickless really changes that, because even full tickless
mode keeps the tick going on _some_ CPU to keep xtime working. John
should know the details better. Do we ever slow the tick rate down
from that?

> Obviously the filesystem truncation would need to be taken into account (and
> this can be different for different timestamps within the same filesystem).

Yeah, for some filesystems the truncation issue is going to be the
bigger one (eg truncation to one-second resolution or something).

But those are likely not the ones you'd necessarily care to test. FAT?
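
A quick (hypothetical, untested) probe for that: create a file on the
filesystem under test and look at the nanosecond part of st_mtim. On a
one-second-granularity filesystem tv_nsec comes back 0 every time, and
FAT even rounds mtime to 2-second boundaries:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
	/* Path is just illustrative - point it at the fs under test. */
	const char *path = argc > 1 ? argv[1] : "probe.tmp";
	struct stat st;
	int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);

	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(path);
		return 1;
	}
	printf("mtime = %ld.%09ld\n",
	       (long)st.st_mtim.tv_sec, (long)st.st_mtim.tv_nsec);
	close(fd);
	unlink(path);
	return 0;
}

One sample obviously proves nothing (tv_nsec can legitimately be
zero), but a handful of runs makes the truncation pattern clear.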

Linus