Re: Discussion: quick_pit_calibrate is slow
From: George Spelvin
Date: Wed Jun 10 2015 - 11:45:11 EST
Ingo Molnar <mingo.kernel.org@xxxxxxxxx> wrote:
>* George Spelvin <linux@xxxxxxxxxxx> wrote:
> Btw., we should make the patch fixing Adrian's system patch 0, as it
> looks simple and obvious enough, agreed?
Agreed. The one that just fails the quick calibration if it has
no chance of succeeding.
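For the record, the shape of that bail-out is roughly the following
(a sketch only, not the actual patch: pit_verify_msb() is the existing
helper in arch/x86/kernel/tsc.c, but this wrapper, its name, and the
exact test are illustrative):

/*
 * Illustrative sketch: check that PIT channel 2 is really loaded and
 * counting before committing to the ~50 ms quick-calibration loop.
 * Meant to live in arch/x86/kernel/tsc.c next to quick_pit_calibrate().
 */
static bool __init pit_usable_for_quick_calibration(void)
{
	/* Channel 2, mode 0 (one-shot), binary; count down from 0xffff */
	outb(0xb0, 0x43);
	outb(0xff, 0x42);
	outb(0xff, 0x42);

	/* Flush a stale latch, then expect the MSB at (or just below) 0xff */
	pit_verify_msb(0);
	return pit_verify_msb(0xff) || pit_verify_msb(0xfe);
}

quick_pit_calibrate() would return 0 immediately when this fails, and
the caller already treats 0 as "fall back to the slower calibration".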
>> Sounds good, but when do we get to the decruftification? I'd prefer to
>> prepare the final patch (if nothing else, so Linus will be reassured
>> by the diffstat), although I can see holding it back for a few releases.
> what do you call 'decruftification'? So I'd suggest besides obvious bug
> (and comment) fixes we should not change the current calibration code but
> add your new logic as the first step, and preserve those other methods,
> because we know that it works reasonably well on a wide range of hardware.
I mean getting rid of the current code, which works but (as I explained
in detail) is seriously sub-optimal.
I absolutely agree with being cautious, but I also don't want to produce a
software volcano, adding another layer of lava on top of what's already
there because I'm scared to disturb the mess.
I've seen some rants before castigating people for working around problems
in a driver, rather than fixing the offending core kernel code.
My hope in this discussion is to come to an agreement that an overhaul
is justified. (Subject to revision based on experience, of course;
it may turn out my attempted fix is a complete shambles.)
> Once your bits work fine will there be any need for any of the two other
> PIT based calibration methods? We can simply remove them at a suitable
> point in time and have a single (and by definition non-crufty) piece of
> PIT calibration code.
If my plan survives contact with reality, it should work better 100%
of the time and obsolete the old code.
You said it should fall back to the old code, but there's not a lot of
difference between failures I can detect and failures I can work around.
I know it's a fatal error to underestimate the perversity of PC hardware
(virtualized hardware even more so) but I'm trying to mitigate that by
really deeply understanding what the current code does and all the error
conditions it can handle.
> (and RTC if that can be managed.)
Adding a new hardware dependency is the riskiest part, but it would be
nice to have a sanity check for Atom (which has an RTC, but no PIT).
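Concretely, the sort of check I'm picturing is something like this
(a sketch under the assumption that timing one RTC update cycle
against the TSC is good enough; the function name is made up):

#include <linux/mc146818rtc.h>
#include <asm/processor.h>
#include <asm/timex.h>

/*
 * Hypothetical sketch of an RTC cross-check: count TSC ticks across
 * one full RTC update cycle (falling edge of the update-in-progress
 * bit to the next falling edge, i.e. one second).  Early boot, single
 * CPU, so no rtc_lock worries.  Only needs to be good to a few percent
 * to catch a grossly wrong PIT-based result.
 */
static u64 __init tsc_per_rtc_second(void)
{
	u64 t1 = 0, t2 = 0;
	int i;

	for (i = 0; i < 2; i++) {
		/* Wait for UIP to rise, then fall: start of a fresh second */
		while (!(CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP))
			cpu_relax();
		while (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP)
			cpu_relax();
		if (i == 0)
			t1 = get_cycles();
		else
			t2 = get_cycles();
	}
	return t2 - t1;		/* roughly one RTC second, in TSC ticks */
}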
>> Definitely desired, but I have to be careful here. Obviously I can't
>> print during the timing loop, so it will take either a lot of memory,
>> or add significant computation to the loop.
> So in theory you can use a tracepoint and enable boot tracing. Not sure it's
> initialized that early in the bootup, has to be explored ...
I definitely appreciate whatever you can suggest here, but I'm not sure
that's quite the solution. Tracepoints are designed to be extremely
lightweight when off, but I need lightweight when *on* so I don't distort
the timing and create heisenbugs.
And boot tracing is all about timing, which doesn't start working until
TSC calibration completes (I've been snipping the timestamps off the
debug logs I print because they're all "[ 0.000000]"), e.g.:
} [ 0.000000] RTC sample 106: 3875
} [ 0.000000] RTC sample 107: 3877
} [ 0.000000] RTC sample 108: 3875
} [ 0.000000] tsc: Fast TSC calibration using PIT: 254(18360)..150(18360)
} [ 0.000000] Using user defined TSC freq: 3399.965 MHz
} [ 0.000000] tsc: Detected 3399.965 MHz processor
} [ 0.000020] Calibrating delay loop (skipped), value calculated using timer frequency.. 6802.26 BogoMIPS (lpj=11333216)
I just had the flash of insight to collect all the timestamps always, and
*either* preserve them for debugging *or* use them as boot-time entropy
seed material and deallocate. It's not like there's any shortage of RAM.
But even better would be to find a useful summary, rather than just
up-ending the bit bucket over the victim's head.
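To be a little more concrete about the collect-everything idea (all the
names below are made up; add_device_randomness() is the existing
interface that mixes in seed material without crediting any entropy):

#include <linux/types.h>
#include <linux/random.h>
#include <linux/printk.h>

#define CAL_MAX_SAMPLES	256

/* __initdata, so the buffer is given back after boot in any case */
static u64 __initdata cal_samples[CAL_MAX_SAMPLES];
static unsigned int __initdata cal_nsamples;

/* Cheap enough to call inside the timing loop: one compare, one store */
static __always_inline void cal_record(u64 tsc)
{
	if (cal_nsamples < CAL_MAX_SAMPLES)
		cal_samples[cal_nsamples++] = tsc;
}

/* After calibration: either dump the raw samples or feed them to the pool */
static void __init cal_finish(bool debug)
{
	unsigned int i;

	if (debug) {
		for (i = 0; i < cal_nsamples; i++)
			pr_debug("calibration sample %u: %llu\n",
				 i, (unsigned long long)cal_samples[i]);
	} else {
		add_device_randomness(cal_samples,
				      cal_nsamples * sizeof(cal_samples[0]));
	}
}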