Re: [PATCH] x86/tsc: Add tsc_tuned_baseclk flag disabling CPUID.16h use for tsc calibration

From: Thomas Gleixner
Date: Mon Jan 20 2020 - 08:43:05 EST


Krzysztof,

Krzysztof Piecuch <piecuch@xxxxxxxxxxxxxx> writes:
> On Friday, January 17, 2020 4:37 PM, Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:
>> Wouldn't it be better to have an option tsc_max_refinement= to increase the 1%?
>
> All that the comments about it say is:
>
> * If there are any calibration anomalies (too many SMIs, etc),
> * or the refined calibration is off by 1% of the fast early
> * calibration, we throw out the new calibration and use the
> * early calibration.
>
> I still don't fully understand why the "1% rule" exists.

Simply because all of this is horribly fragile and if you put virt into
the picture it gets even worse.

The initial calibration via PIT/HPET is halfway accurate in most cases
and we use the 1% as a sanity check.
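
To make that concrete, here is a rough sketch of what the check boils
down to (simplified, not the actual tsc.c code; the real refinement
work in arch/x86/kernel/tsc.c is more involved):

	/*
	 * Sketch only: accept the refined value only if it stays
	 * within 1% of the early (fast) calibration, otherwise
	 * fall back to the early result.
	 */
	static unsigned long pick_tsc_khz(unsigned long early_khz,
					  unsigned long refined_khz)
	{
		unsigned long delta = early_khz > refined_khz ?
				      early_khz - refined_khz :
				      refined_khz - early_khz;

		if (delta > early_khz / 100)
			return early_khz;	/* refinement looks bogus */

		return refined_khz;
	}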

> Ideally it would be better to get the early calibration right than
> risk getting it wrong because of an "anomaly".

Ideally we would just have a way to read the stupid frequency from some
reliable place, but there is no such thing.

Guess why we have all this code. Surely not because we have nothing
better to do than to dream up a variety of weird ways to figure out
that frequency.

> OTOH if your system doesn't support any of the early calibration
> methods other than CPUID.16h (mine doesn't support either PIT or MSR)
> "tsc_max_refinement" would allow you to control max tsc_hz error.

Widening the error window here is clearly a hack. If you have to supply
a valid number there anyway, why not just provide the frequency itself
on the command line? That would at least make more sense and would avoid
using completely wrong data in the early boot stage.
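
If someone really wants to go that route, the plumbing is a trivial
early_param hook, roughly along these lines (the parameter and variable
names below are made up just to illustrate the idea):

#include <linux/init.h>
#include <linux/kernel.h>

/* Hypothetical "tsc_freq_khz=" command line override, sketch only */
static unsigned long cmdline_tsc_khz __initdata;

static int __init parse_tsc_freq_khz(char *str)
{
	if (!str)
		return -EINVAL;
	/* kstrtoul() leaves cmdline_tsc_khz untouched on parse errors */
	return kstrtoul(str, 0, &cmdline_tsc_khz);
}
early_param("tsc_freq_khz", parse_tsc_freq_khz);

The early calibration path could then use cmdline_tsc_khz, if set,
instead of trusting whatever CPUID.16h hands back.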

Thanks,

tglx