Re: A couple of TSC questions

From: Paul E. McKenney
Date: Tue Mar 28 2023 - 17:59:00 EST


On Mon, Mar 27, 2023 at 10:19:54AM +0800, Feng Tang wrote:
> On Fri, Mar 24, 2023 at 05:47:33PM -0700, Paul E. McKenney wrote:
> > On Wed, Mar 22, 2023 at 01:14:48PM +0800, Feng Tang wrote:
> > > Hi, Paul
> > >
> > > On Tue, Mar 21, 2023 at 04:23:28PM -0700, Paul E. McKenney wrote:
> > > > Hello, Feng!
> > > >
> > > > I hope that things are going well for you and yours!
> > >
> > > Thanks!
> > >
> > > > First, given that the kernel can now kick out HPET instead of TSC in
> > > > response to clock skew, does it make sense to permit recalibration of
> > > > the still-used TSC against the marked-unstable HPET?
> > >
> > > Yes, it makes sense to me. I don't know the details of the case, but
> > > if the TSC frequency comes from CPUID info, a recalibration against
> > > a third-party HW timer like ACPI_PM should help here.
> > >
> > > A further thought: if there really are quite a few cases where the
> > > CPUID-provided TSC frequency info is not accurate, then we may need
> > > to enable the recalibration by default and give a warning message
> > > when detecting any mismatch.
> >
> > Now that you mention it, it is quite hard to choose correctly within
> > the kernel. To do it right seems to require that NTP information be
> > pushed into the kernel.
>
> Yes, we need an 'always-right' reference, but the system has to have
> network access.
>
> I know there have been many different problems related to TSC, but
> the real HW/FW-related problems are only about the accuracy of the
> TSC frequency calibration/calculation.
>
> Before commit b50db7095fe0 ("x86/tsc: Disable clocksource watchdog
> for TSC on qualified platorms"), if the TSC freq was calculated
> from CPUID or MSR, the HPET/ACPI_PM_TIMER could detect a possible
> calculation problem during the clocksource watchdog check. For this
> case, we may need to force a recalibration against HPET/ACPI_PM_TIMER.

Agreed, one possible assumption is that TSC, HPET, and ACPI_PM_TIMER
are very unlikely to be in error in exactly the same way.
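
Just to make the idea concrete, here is a rough user-space sketch of
what such a cross-check/recalibration might look like. It is only an
illustration: CLOCK_MONOTONIC_RAW stands in for HPET/ACPI_PM, the
reported_khz value and the 500 ppm threshold are invented, and the
real thing would of course live in the kernel's calibration path
rather than in main().

/*
 * Illustrative sketch only: estimate the TSC frequency over a short
 * window against a reference clock and compare it with the value
 * that CPUID/MSR reported.  Build with: gcc -O2 tsc_recal.c
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>

static uint64_t ns_now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
	uint64_t reported_khz = 2995200;	/* pretend CPUID said this */
	struct timespec win = { 0, 100 * 1000 * 1000 };	/* 100 ms window */
	uint64_t t0 = ns_now(), tsc0 = __rdtsc();

	nanosleep(&win, NULL);

	uint64_t t1 = ns_now(), tsc1 = __rdtsc();
	/* kHz = cycles * 10^6 / elapsed nanoseconds */
	uint64_t measured_khz = (tsc1 - tsc0) * 1000000ull / (t1 - t0);
	int64_t ppm = ((int64_t)measured_khz - (int64_t)reported_khz) *
		      1000000 / (int64_t)reported_khz;

	printf("reported %llu kHz, measured %llu kHz, delta %lld ppm\n",
	       (unsigned long long)reported_khz,
	       (unsigned long long)measured_khz, (long long)ppm);
	if (ppm > 500 || ppm < -500)
		printf("mismatch: would warn and prefer recalibrated value\n");
	return 0;
}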

> > > > Second, we are very occasionally running into console messages like this:
> > > >
> > > > Measured 2 cycles TSC warp between CPUs, turning off TSC clock.
> > > >
> > > > This comes from check_tsc_sync_source() and indicates that a TSC read
> > > > on one CPU returned a later time than a subsequent read on another CPU.
> > > > I am beginning to suspect that these can be caused by unscheduled delays
> > > > in the TSC synchronization code, but figured I should ask you if you have
> > > > ever seen these. And of course, if so, what the usual causes might be.
> > >
> > > I haven't seen this error myself, nor have I received similar reports.
> > > Usually it should be easy to detect once it happens, as falling back
> > > to HPET will cause an obvious performance degradation.
> >
> > And that is exactly what happened. ;-)
> >
> > > Could you give more detail about when and how it happens, and HW
> > > info such as how many sockets the platform has?
> >
> > We are in early days, so I am checking for other experiences.
> >
> > > CC Thomas and Waiman, as they discussed a similar case here:
> > > https://lore.kernel.org/lkml/87h76ew3sb.ffs@tglx/T/#md4d0a88fb708391654e78312ffa75b481690699f
> >
> > Fun! ;-)
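
For anyone else following along, here is roughly the shape of the
check that produces the "Measured N cycles TSC warp" message, as a
much-simplified user-space sketch for discussion only.  The real code
in arch/x86/kernel/tsc_sync.c runs the two sides on specific CPUs,
uses rdtsc_ordered(), tracks random_warps, and so on; none of that is
shown here.

/*
 * Two threads take turns reading the TSC under a shared lock; a
 * "warp" is recorded whenever a read returns an earlier value than
 * the previous read taken under the same lock.
 * Build with: gcc -O2 -pthread warp.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

#define LOOPS 1000000

static pthread_mutex_t sync_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t last_tsc;
static uint64_t max_warp;
static unsigned int nr_warps;

static void *warp_check(void *arg)
{
	(void)arg;
	for (int i = 0; i < LOOPS; i++) {
		pthread_mutex_lock(&sync_lock);
		uint64_t now = __rdtsc();
		uint64_t prev = last_tsc;

		last_tsc = now;
		if (now < prev) {		/* time went backwards */
			nr_warps++;
			if (prev - now > max_warp)
				max_warp = prev - now;
		}
		pthread_mutex_unlock(&sync_lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, warp_check, NULL);
	pthread_create(&b, NULL, warp_check, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("%u warps, max %llu cycles\n",
	       nr_warps, (unsigned long long)max_warp);
	return 0;
}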

Waiman, do you recall what fraction of the benefit was provided by the
first patch, that is, the one that grouped the sync_lock, last_tsc,
max_warp, nr_warps, and random_warps global variables into a single
struct?
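
To make sure we are talking about the same thing, the grouping I have
in mind is roughly the following shape.  This is my reconstruction
for illustration, not necessarily your actual patch, and the struct
name is invented:

struct tsc_warp_state {
	arch_spinlock_t	lock;
	cycles_t	last_tsc;
	cycles_t	max_warp;
	int		nr_warps;
	int		random_warps;
} ____cacheline_aligned_in_smp;

static struct tsc_warp_state tsc_warp = {
	.lock = __ARCH_SPIN_LOCK_UNLOCKED,
};

That is, the sync loop would bounce a single cache line between the
two CPUs rather than several independently placed globals.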

Thanx, Paul