Re: frequent lockups in 3.18rc4
From: Don Zickus
Date: Thu Nov 20 2014 - 15:38:34 EST
On Thu, Nov 20, 2014 at 11:43:07AM -0800, Linus Torvalds wrote:
> On Thu, Nov 20, 2014 at 7:25 AM, Dave Jones <davej@xxxxxxxxxx> wrote:
> >
> > Disabling CONTEXT_TRACKING didn't change the problem.
> > Unfortunately the full trace didn't make it over usb-serial this time. Grr.
> >
> > Here's what came over serial..
> >
> > NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [trinity-c35:11634]
> > RIP: 0010:[<ffffffff88379605>] [<ffffffff88379605>] copy_user_enhanced_fast_string+0x5/0x10
> > RAX: ffff880220eb4000 RBX: ffffffff887dac64 RCX: 0000000000006a18
> > RDX: 000000000000e02f RSI: 00007f766f466620 RDI: ffff88016f6a7617
> > RBP: ffff880220eb7f78 R08: 8000000000000063 R09: 0000000000000004
> > Call Trace:
> > [<ffffffff882f4225>] ? SyS_add_key+0xd5/0x240
> > [<ffffffff8837adae>] ? trace_hardirqs_on_thunk+0x3a/0x3f
> > [<ffffffff887da092>] system_call_fastpath+0x12/0x17
>
> Ok, that's just about half-way in a ~57kB memory copy (you can see it
> in the register state: %rdx contains the original size of the key
> payload, rcx contains the current remaining size: 57kB total, 27kB
> left).
>
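Just spelling out those numbers for anyone following along:
copy_user_enhanced_fast_string is essentially a "rep movsb", so %rcx counts
down while %rdx keeps the length it was called with. Trivial stand-alone
arithmetic, nothing kernel-specific:

    #include <stdio.h>

    int main(void)
    {
            unsigned long total     = 0xe02f; /* RDX from the splat: original payload length  */
            unsigned long remaining = 0x6a18; /* RCX from the splat: bytes "rep movsb" has left */

            printf("total:     %lu bytes (~%lu kB)\n", total, total / 1000);
            printf("remaining: %lu bytes (~%lu kB)\n", remaining, remaining / 1000);
            printf("copied:    %lu bytes\n", total - remaining);
            return 0;
    }

That comes out to 57391 / 27160 / 30231 bytes, i.e. ~57kB total, ~27kB left,
just over half-way through, matching your reading.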
> And it's holding absolutely zero locks, and not even doing anything
> odd. It wasn't doing anything particularly odd before either, although
> the kmalloc() of a 64kB area might just have caused a fair amount of
> VM work, of course.
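For reference, the add_key() path it's sitting in boils down to roughly the
sketch below. The helper name copy_key_payload is made up and this is
simplified from memory, not the literal security/keys/keyctl.c code (the real
code also falls back to vmalloc() for large payloads):

    #include <linux/err.h>
    #include <linux/slab.h>
    #include <linux/uaccess.h>

    /* Hypothetical helper showing the shape of the payload copy in
     * sys_add_key(); the copy_from_user() below is the
     * copy_user_enhanced_fast_string call in the trace above. */
    static void *copy_key_payload(const void __user *_payload, size_t plen)
    {
            void *payload;

            if (!plen)
                    return NULL;

            payload = kmalloc(plen, GFP_KERNEL);    /* the ~64kB allocation */
            if (!payload)
                    return ERR_PTR(-ENOMEM);

            if (copy_from_user(payload, _payload, plen)) {
                    kfree(payload);
                    return ERR_PTR(-EFAULT);
            }

            return payload;
    }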
Just for clarification: softlockups are processes hogging the cpu (thus
blocking the high-priority per-cpu watchdog thread).
Hardlockups, on the other hand, are cpus with interrupts disabled for too
long (thus blocking the timer interrupt).
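If it helps to see the mechanism, the rough shape is the sketch below.
This is heavily simplified from kernel/watchdog.c; the function and variable
names are approximate, not the literal code:

    #include <linux/hrtimer.h>
    #include <linux/jiffies.h>
    #include <linux/percpu.h>
    #include <linux/printk.h>

    static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts);   /* last time watchdog/N ran */
    static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts);
    static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
    static unsigned long softlockup_thresh; /* in jiffies; really derived from watchdog_thresh */
    static u64 sample_period;               /* how often the hrtimer fires */

    /* The high-priority per-cpu "watchdog/N" thread (run via smpboot): all it
     * does is refresh a timestamp.  If something hogs the CPU so hard that
     * even this thread can't get scheduled, the timestamp goes stale. */
    static void watchdog_thread_fn(unsigned int cpu)
    {
            __this_cpu_write(watchdog_touch_ts, jiffies);
    }

    /* Periodic hrtimer callback, runs from timer-interrupt context. */
    static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
    {
            __this_cpu_inc(hrtimer_interrupts);     /* proof that timer interrupts still fire */

            if (time_after(jiffies, __this_cpu_read(watchdog_touch_ts) + softlockup_thresh))
                    pr_emerg("BUG: soft lockup - CPU stuck!\n");    /* watchdog thread starved */

            /* (the real code also wakes the watchdog thread from here) */
            hrtimer_forward_now(hrtimer, ns_to_ktime(sample_period));
            return HRTIMER_RESTART;
    }

    /* NMI (perf) callback: if the hrtimer count stopped moving, interrupts
     * have been disabled for too long -> hard lockup. */
    static void watchdog_nmi_fn(void)
    {
            if (__this_cpu_read(hrtimer_interrupts) ==
                __this_cpu_read(hrtimer_interrupts_saved))
                    pr_emerg("Watchdog detected hard LOCKUP\n");

            __this_cpu_write(hrtimer_interrupts_saved,
                             __this_cpu_read(hrtimer_interrupts));
    }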
Either way, that might coincide with your scheduler theory below. Don't know.
Cheers,
Don
>
> You know what? I'm seriously starting to think that these bugs aren't
> actually real. Or rather, I don't think it's really a true softlockup,
> because most of them seem to happen in totally harmless code.
>
> So I'm wondering whether the real issue might not be just this:
>
> [loadavg: 164.79 157.30 155.90 37/409 11893]
>
> together with possibly a scheduler issue and/or a bug in the smpboot
> thread logic (that the watchdog uses) or similar.
>
> That's *especially* true if it turns out that the 3.17 problem you saw
> was actually a perf bug that has already been fixed and is in stable.
> We've been looking at kernel/smp.c changes, and looking for x86 IPI or
> APIC changes, and found some suspicious (though, at least on x86, harmless)
> code, and this exercise might be worth it for that reason, but what if
> it's really just a scheduler regression.
>
> There's been a *lot* more scheduler changes since 3.17 than the small
> things we've looked at for x86 entry or IPI handling. And the
> scheduler changes have been about things like overloaded scheduling
> groups etc, and I could easily imagine that some bug *there* ends up
> causing the watchdog process not to schedule.
>
> Hmm? Scheduler people?
>
> Linus