Re: cpu softplug kernel hang

From: Luming Yu
Date: Tue Jul 13 2010 - 03:09:23 EST


On Fri, Jul 9, 2010 at 2:52 AM, Prarit Bhargava <prarit@xxxxxxxxxx> wrote:
> The panic below is from a 2.6.32-based kernel, however, AFAICT the same
> issue exists with the latest 2.6.35-rc3+ kernel.
>
> I have diagnosed the issue as being identical to the one that I fixed
> in the Intel rngd driver some time ago:
>
> http://marc.info/?l=linux-kernel&m=117275119001289&w=2
>
> When doing the following,
>
> while true; do
>         for i in `seq 12 23`; do echo 0 > /sys/devices/system/cpu/cpu$i/online; done
>         sleep 5
>         for i in `seq 12 23`; do echo 1 > /sys/devices/system/cpu/cpu$i/online; done
>         sleep 5
> done
>
> I see (with the nmi_watchdog enabled)
>
> BUG: NMI Watchdog detected LOCKUP on CPU11, ip ffffffff81029e72, registers:
> CPU 11
> Modules linked in: nfs lockd fscache nfs_acl auth_rpcgss autofs4 sunrpc
> cpufreq_ondemand acpi_cpufreq freq_table ipv6 dm_mirror dm_region_hash dm_log
> uinput sg serio_raw i2c_i801 iTCO_wdt iTCO_vendor_support ioatdma i7core_edac
> edac_core shpchp igb dca ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif ahci
> pata_acpi ata_generic pata_jmicron radeon ttm drm_kms_helper drm i2c_algo_bit
> i2c_core dm_mod [last unloaded: microcode]
>
> Pid: 704, comm: kexec Not tainted 2.6.32 #1 X8DTN
> RIP: 0010:[<ffffffff81029e72>]  [<ffffffff81029e72>] ipi_handler+0x32/0xa0
> RSP: 0000:ffff8801474a3f58  EFLAGS: 00000046
> RAX: 0000000000000000 RBX: ffff880337393ea8 RCX: ffff88013ae41580
> RDX: 00000000ffffffff RSI: 0000000000000000 RDI: ffff880337393ea8
> RBP: ffff8801474a3f68 R08: 0000000061c941a6 R09: 00000000578070b9
> R10: 0000000080507210 R11: 0000000025410601 R12: 0000000000000086
> R13: 00000000ffffffff R14: ffffffff817491d0 R15: 0000000090793245
> FS:  00007fefd5f3d700(0000) GS:ffff8801474a0000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> CR2: 000000000040d000 CR3: 0000000316a8e000 CR4: 00000000000006e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process kexec (pid: 704, threadinfo ffff880314caa000, task ffff880335a0b500)
> Stack:
> ffff880147571f40 000000000000000b ffff8801474a3f98 ffffffff810a6d28
> <0> 00000000aa149910 00000000f21570f0 000000000008b495 00000000521ebd53
> <0> ffff8801474a3fa8 ffffffff8102ea57 ffff880314cabf80 ffffffff81013e53
> Call Trace:
> <IRQ>
> [<ffffffff810a6d28>] generic_smp_call_function_interrupt+0x78/0x130
> [<ffffffff8102ea57>] smp_call_function_interrupt+0x27/0x40
> [<ffffffff81013e53>] call_function_interrupt+0x13/0x20
> <EOI>
> Code: 0f 1f 44 00 00 48 89 fb 9c 58 0f 1f 44 00 00 49 89 c4 fa 66 0f 1f 44 00
> 00 f0 ff 0f 8b 47 04 85 c0 75 0f 66 0f 1f 44 00 00 f3 90 <8b> 43 04 85 c0 74 f7
> 8b 7b 18 83 ff ff 74 47 48 8b 05 08 25 a1
>
> Since this is a panic, I get traces from all of the other CPUs.
>
> CPU 14 is in _write_lock_irq
> CPU  2 is in _read_lock
> CPU  6 has called smp_call_function() with the ipi_handler to sync MTRRs on
> the new CPU
>
> The problem is that ipi_handler does this:
>
> static void ipi_handler(void *info)
> {
> #ifdef CONFIG_SMP
>         struct set_mtrr_data *data = info;
>         unsigned long flags;
>
>         local_irq_save(flags);
>
>         atomic_dec(&data->count);         << global counter; each CPU entering
>                                              ipi_handler decrements it
>         while (!atomic_read(&data->gate)) << gate stays 0 (so each CPU spins
>                 cpu_relax();                 here) until count reaches 0
>
> So what happens is that CPU 2 is in _read_lock and has acquired a lock.  CPU 14
> is waiting for the release of that lock with IRQs *off*.
>
> CPU 6 launches smp_call_function, and CPU 2 answers and runs the ipi_handler()
> and waits (as do all other processors).
>
> CPU 14, however, does not see the IPI because it is waiting with interrupts off
> for the lock that CPU 2 is holding.
>
> Boom.  Deadlock.

Hmmm... lockdep is supposed to be able to detect this. Was there any lockdep
warning before the deadlock happened?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/