Re: frequent lockups in 3.18rc4

From: Thomas Gleixner
Date: Wed Dec 03 2014 - 15:37:46 EST


On Wed, 3 Dec 2014, John Stultz wrote:
> On Wed, Dec 3, 2014 at 11:25 AM, Linus Torvalds
> <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> > On Wed, Dec 3, 2014 at 11:00 AM, Dave Jones <davej@xxxxxxxxxx> wrote:
> >>
> >> So right after sending my last mail, I rebooted, and restarted the run
> >> on the same kernel again.
> >>
> >> As I was writing this mail, this happened.
> >>
> >> [ 524.420897] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [trinity-c178:20182]
> >>
> >> and that's all that made it over the console. I couldn't log in via ssh,
> >> and thought "ah-ha, so it IS bad". I walked over to reboot it, and
> >> found I could actually log in on the console. check out this dmesg..
> >>
> >> [ 503.683055] Clocksource tsc unstable (delta = -95946009388 ns)
> >> [ 503.692038] Switched to clocksource hpet
> >> [ 524.420897] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [trinity-c178:20182]
> >
> > Interesting. That whole NMI watchdog thing happens pretty much 22s
> > after the "TSC unstable" message.
> >
> > Have you ever seen that TSC issue before? The watchdog relies on
> > comparing get_timestamp() differences, so if the timestamp was
> > incorrect...
> >
> > Maybe that whole "clocksource_watchdog()" is bogus. That delta is
> > about 96 seconds, which sounds very odd. I'm not seeing how the TSC
> > could actually screw up that badly, so I'd almost be more likely to
> > blame the "watchdog" clock.
> >
> > I don't know. This piece of code:
> >
> > delta = clocksource_delta(wdnow, cs->wd_last, watchdog->mask);
> >
> > makes no sense to me. Shouldn't it be
> >
> > delta = clocksource_delta(wdnow, watchdog->wd_last, watchdog->mask);
>
> So we store the wdnow value in cs->wd_last a few lines below, so I
> don't think that's problematic.
>
> I do recall seeing problematic watchdog behavior back in the day with
> PREEMPT_RT when a high-priority task really starved the watchdog for a
> long time. When we came back the HPET had wrapped, making the wd_delta
> look quite small relative to the TSC delta, causing improper
> disqualification of the TSC.

Right, that scenario resulted in a delta > 0. I have no idea how we
could create a negative delta by wrapping the HPET around, i.e. the
HPET being 96 seconds ahead of the TSC.
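
For anyone not looking at kernel/time/clocksource.c right now, here is a
userspace toy model of that check. It is only a sketch: the masks,
frequencies and threshold are stand-ins, not the kernel values. The
points it tries to show are that cs->wd_last is the per-clocksource
snapshot of the last watchdog reading (stored right after the deltas are
taken, which is the update John refers to), and that, if I read the code
right, the number printed in the "Clocksource tsc unstable" line is
cs_nsec - wd_nsec, which is why -95946009388 ns reads as the HPET
running ~96 seconds ahead of the TSC over one watchdog interval.

/*
 * Toy model of the delta check in clocksource_watchdog(). Not kernel
 * code; constants and the ns conversion are made up for illustration.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define WATCHDOG_THRESHOLD_NS	62500000LL	/* 62.5ms, same order as the kernel's */

/* Masked counter difference, like the kernel's clocksource_delta(). */
static uint64_t clocksource_delta(uint64_t now, uint64_t last, uint64_t mask)
{
	return (now - last) & mask;
}

struct cs_state {
	uint64_t cs_last;	/* last reading of the monitored clocksource (TSC) */
	uint64_t wd_last;	/* last watchdog (HPET) reading, stored per clocksource */
};

int main(void)
{
	const uint64_t hpet_mask = 0xffffffffULL;		/* 32-bit HPET counter */
	const uint64_t tsc_mask  = UINT64_MAX;
	const double hpet_ns_per_cyc = 1e9 / 14318180.0;	/* ~14.3 MHz */
	const double tsc_ns_per_cyc  = 1e9 / 3.2e9;		/* 3.2 GHz, made up */

	/* State left behind by the previous watchdog run. */
	struct cs_state tsc = { .cs_last = 5000000000ULL, .wd_last = 1000000ULL };

	/* Fresh readings, nominally ~0.5s later. */
	uint64_t csnow = tsc.cs_last + 1600000000ULL;
	uint64_t wdnow = tsc.wd_last + 7159090ULL;

	int64_t wd_nsec = clocksource_delta(wdnow, tsc.wd_last, hpet_mask) * hpet_ns_per_cyc;
	int64_t cs_nsec = clocksource_delta(csnow, tsc.cs_last, tsc_mask) * tsc_ns_per_cyc;

	/* The kernel stores both new readings for the next round ... */
	tsc.cs_last = csnow;
	tsc.wd_last = wdnow;

	/* ... and complains with delta = cs_nsec - wd_nsec if they diverged. */
	if (llabs(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD_NS)
		printf("clocksource unstable (delta = %" PRId64 " ns)\n", cs_nsec - wd_nsec);
	else
		printf("ok (delta = %" PRId64 " ns)\n", cs_nsec - wd_nsec);

	return 0;
}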

This looks more like genuine TSC wreckage. So we have these possible
causes:

1) SMI

2) Power states

3) Writing to the wrong MSR

So I assume that 1) and 2) are a non-issue; they should surface in
normal, non-fuzzed operation as well.
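
If 3) is the suspect, one cheap thing to check after a fuzz run is
whether IA32_TSC_ADJUST is still zero on the affected CPUs: a WRMSR to
the TSC is supposed to be reflected there. That assumes the CPU has the
TSC_ADJUST MSR at all and that the msr driver is loaded, neither of
which I know for Dave's box. Rough sketch (the MSR number 0x3b is from
the SDM, everything else is illustrative):

/*
 * Read IA32_TSC_ADJUST (MSR 0x3b) on CPU 0 via the msr driver.
 * A non-zero value would mean something wrote the TSC after boot.
 * Needs /dev/cpu/0/msr (modprobe msr) and root; illustrative only.
 */
#include <fcntl.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_IA32_TSC_ADJUST	0x3b

int main(void)
{
	uint64_t val;
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/cpu/0/msr");
		return 1;
	}
	if (pread(fd, &val, sizeof(val), MSR_IA32_TSC_ADJUST) != sizeof(val)) {
		perror("rdmsr TSC_ADJUST");	/* fails if the MSR does not exist */
		close(fd);
		return 1;
	}
	printf("IA32_TSC_ADJUST(cpu0) = %" PRId64 "\n", (int64_t)val);
	close(fd);
	return 0;
}

If the CPU does not have TSC_ADJUST, the pread should simply fail, so
the check is harmless either way.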

Dave, does that TSC unstable thing always happen AFTER you started
fuzzing? If yes, what is the fuzzer doing this time?

Thanks,

tglx