Re: + shmem-fix-faulting-into-a-hole-while-its-punched-take-2.patch added to -mm tree

From: Vlastimil Babka
Date: Wed Jul 09 2014 - 12:35:34 EST


On 07/09/2014 06:03 PM, Sasha Levin wrote:
On 07/09/2014 08:47 AM, Sasha Levin wrote:
So it would again help to see stacks of other tasks, to see who holds the i_mutex and where it's stuck...
The stack printout got garbled because of the large number of tasks and a console buffer that was too small. I've fixed that and will update when (if) the problem reproduces.

Okay, so the issue reproduces on today's -next as well, and here's my analysis.

The hung task timer was triggered for trinity-c37:

[ 483.991095] INFO: task trinity-c37:8968 blocked for more than 120 seconds.
[ 483.995898] Not tainted 3.16.0-rc4-next-20140709-sasha-00024-gd22103d-dirty #775
[ 484.000405] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 484.004961] trinity-c37 D 00000000ffffffff 13160 8968 8558 0x10000000
[ 484.009035] ffff8800c0457ce8 0000000000000006 ffffffff9ac1d920 0000000000000001
[ 484.012654] ffff8800c0457fd8 00000000001e2740 00000000001e2740 00000000001e2740
[ 484.015716] ffff880201610000 ffff8800c0bc3000 ffff8800c0457cd8 ffff880223115bb0
[ 484.050723] Call Trace:
[ 484.051831] schedule (kernel/sched/core.c:2841)
[ 484.053683] schedule_preempt_disabled (kernel/sched/core.c:2868)
[ 484.055979] mutex_lock_nested (kernel/locking/mutex.c:532 kernel/locking/mutex.c:584)
[ 484.058175] ? shmem_fallocate (mm/shmem.c:1760)
[ 484.060441] ? get_parent_ip (kernel/sched/core.c:2555)
[ 484.063899] ? shmem_fallocate (mm/shmem.c:1760)
[ 484.067485] shmem_fallocate (mm/shmem.c:1760)
[ 484.071113] ? put_lock_stats.isra.12 (./arch/x86/include/asm/preempt.h:98 kernel/locking/lockdep.c:254)
[ 484.075128] do_fallocate (include/linux/fs.h:1281 fs/open.c:299)
[ 484.078566] SyS_madvise (mm/madvise.c:332 mm/madvise.c:381 mm/madvise.c:531 mm/madvise.c:462)
[ 484.082074] tracesys (arch/x86/kernel/entry_64.S:542)
[ 484.150284] 2 locks held by trinity-c37/8968:
[ 484.152995] #0: (sb_writers#16){.+.+.+}, at: do_fallocate (fs/open.c:298)
[ 484.158692] #1: (&sb->s_type->i_mutex_key#18){+.+.+.}, at: shmem_fallocate (mm/shmem.c:1760)

It has acquired the sb_writers lock ("sb_start_write(inode->i_sb);") in do_fallocate() and
is blocked trying to acquire i_mutex in shmem_fallocate():

	if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
		return -EOPNOTSUPP;

	mutex_lock(&inode->i_mutex);			<=== HERE

	if (mode & FALLOC_FL_PUNCH_HOLE) {
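
For context, the caller's side looks roughly like this (fs/open.c, 3.16-era do_fallocate(); an abbreviated sketch from memory, not the exact source), which is where lock #0 from the lockdep output above comes from:

long do_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
{
	struct inode *inode = file_inode(file);
	long ret;

	/* ... mode/offset/len sanity checks elided ... */

	sb_start_write(inode->i_sb);		/* lock #0: sb_writers */
	/* ->fallocate is shmem_fallocate() here; it takes i_mutex itself (lock #1) */
	ret = file->f_op->fallocate(file, mode, offset, len);
	sb_end_write(inode->i_sb);
	return ret;
}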

Now, looking at what actually holds i_mutex rather than waiting on it, we see trinity-c507:

[ 488.014804] trinity-c507 D 0000000000000002 13112 9445 8558 0x10000002
[ 488.014860] ffff88010391fab8 0000000000000002 ffffffff9abff6b0 0000000000000002
[ 488.014894] ffff88010391ffd8 00000000001e2740 00000000001e2740 00000000001e2740
[ 488.014912] ffff8800be3f8000 ffff88010389b000 ffff88010391faa8 ffff880223115d58
[ 488.014942] Call Trace:
[ 488.014948] schedule (kernel/sched/core.c:2841)
[ 488.014960] schedule_preempt_disabled (kernel/sched/core.c:2868)
[ 488.014970] mutex_lock_nested (kernel/locking/mutex.c:532 kernel/locking/mutex.c:584)
[ 488.014983] ? unmap_mapping_range (mm/memory.c:2392)
[ 488.014992] ? unmap_mapping_range (mm/memory.c:2392)
[ 488.015005] unmap_mapping_range (mm/memory.c:2392)
[ 488.015021] truncate_inode_page (mm/truncate.c:136 mm/truncate.c:180)
[ 488.015041] shmem_undo_range (mm/shmem.c:441)
[ 488.015059] shmem_truncate_range (mm/shmem.c:537)
[ 488.015069] shmem_fallocate (mm/shmem.c:1771)
[ 488.015079] ? put_lock_stats.isra.12 (./arch/x86/include/asm/preempt.h:98 kernel/locking/lockdep.c:254)
[ 488.015098] do_fallocate (include/linux/fs.h:1281 fs/open.c:299)
[ 488.015110] SyS_madvise (mm/madvise.c:332 mm/madvise.c:381 mm/madvise.c:531 mm/madvise.c:462)
[ 488.015131] tracesys (arch/x86/kernel/entry_64.S:542)

It has acquired the i_mutex lock in shmem_fallocate() and is now blocked on i_mmap_mutex
in unmap_mapping_range():

	details.check_mapping = even_cows ? NULL : mapping;
	details.nonlinear_vma = NULL;
	details.first_index = hba;
	details.last_index = hba + hlen - 1;
	if (details.last_index < details.first_index)
		details.last_index = ULONG_MAX;

	mutex_lock(&mapping->i_mmap_mutex);		<==== HERE
	if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap)))
		unmap_mapping_range_tree(&mapping->i_mmap, &details);
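
The link between shmem_undo_range() and unmap_mapping_range() in that stack is truncate_inode_page(): a page being punched out that is still mapped has to be unmapped first, which is what drags i_mmap_mutex into the picture while i_mutex is held. Roughly (mm/truncate.c, 3.16-era; a sketch from memory):

int truncate_inode_page(struct address_space *mapping, struct page *page)
{
	/* a still-mapped page is unmapped before truncation, and
	 * unmap_mapping_range() takes i_mmap_mutex as shown above */
	if (page_mapped(page)) {
		unmap_mapping_range(mapping,
				    (loff_t)page->index << PAGE_CACHE_SHIFT,
				    PAGE_CACHE_SIZE, 0);
	}
	return truncate_complete_page(mapping, page);
}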

The only process that actually holds the i_mmap_mutex (rather than just waiting on it)
is trinity-c402:

[ 487.925991] trinity-c402 R running task 13160 9339 8558 0x10000000
[ 487.926007] ffff8800b7eb7b88 0000000000000002 ffff88006efe3290 0000000000000282
[ 487.926013] ffff8800b7eb7fd8 00000000001e2740 00000000001e2740 00000000001e2740
[ 487.926022] ffff8800362b3000 ffff8800b7eb8000 ffff8800b7eb7b88 ffff8800b7eb7fd8
[ 487.926028] Call Trace:
[ 487.926030] preempt_schedule (./arch/x86/include/asm/preempt.h:80 kernel/sched/core.c:2889)
[ 487.926034] ___preempt_schedule (arch/x86/kernel/preempt.S:11)
[ 487.926039] ? zap_pte_range (mm/memory.c:1218)
[ 487.926042] ? _raw_spin_unlock (./arch/x86/include/asm/preempt.h:98 include/linux/spinlock_api_smp.h:152 kernel/locking/spinlock.c:183)
[ 487.926045] ? _raw_spin_unlock (include/linux/spinlock_api_smp.h:152 kernel/locking/spinlock.c:183)
[ 487.926049] zap_pte_range (mm/memory.c:1218)
[ 487.926056] unmap_single_vma (mm/memory.c:1256 mm/memory.c:1277 mm/memory.c:1301 mm/memory.c:1346)
[ 487.926060] unmap_vmas (mm/memory.c:1375 (discriminator 1))
[ 487.926066] exit_mmap (mm/mmap.c:2802)
[ 487.926069] ? preempt_count_sub (kernel/sched/core.c:2611)
[ 487.926075] mmput (kernel/fork.c:638)
[ 487.926079] do_exit (kernel/exit.c:744)
[ 487.926086] do_group_exit (kernel/exit.c:884)
[ 487.926091] SyS_exit_group (kernel/exit.c:895)
[ 487.926095] tracesys (arch/x86/kernel/entry_64.S:542)

We can see that it's not blocked, since it's in the middle of a spin-unlock call, and we
can guess it has been in that function for a while, given the hung task timer and the
other processes waiting on that i_mmap_mutex:

Hm, zap_pte_range() has a potentially endless loop due to the 'goto again' path. Could this be a situation somewhat similar to the fallocate problem, but where parallel faulters on the shared memory prevent a process from exiting? Although they don't fault the pages into the same address space, could they somehow interact through the TLB flushing code? And is it only after fixing the original problem that we can observe this one?
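
For reference, the loop structure in question looks roughly like this (mm/memory.c, 3.16-era zap_pte_range(); a heavily abridged paraphrase, not the exact code):

again:
	start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	pte = start_pte;
	do {
		/* ... zap one pte; if the mmu_gather batch fills up ... */
		if (unlikely(!__tlb_remove_page(tlb, page))) {
			force_flush = 1;
			break;			/* stop early, flush below */
		}
	} while (pte++, addr += PAGE_SIZE, addr != end);

	pte_unmap_unlock(start_pte, ptl);

	if (force_flush) {
		force_flush = 0;
		tlb_flush_mmu(tlb);		/* flush TLBs, free the batch */
		if (addr != end)
			goto again;		/* resume the remaining range */
	}

So as long as the batch keeps filling up before the range is finished, the task keeps going back to 'again' while holding i_mmap_mutex higher up the call chain.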


[ 487.857145] trinity-c338 D 0000000000000008 12904 9274 8558 0x10000004
[ 487.857179] ffff8800b9bcbaa8 0000000000000002 ffffffff9abff6b0 0000000000000000
[ 487.857193] ffff8800b9bcbfd8 00000000001e2740 00000000001e2740 00000000001e2740
[ 487.857202] ffff880107198000 ffff8800b9bc0000 ffff8800b9bcba98 ffff8801ec17f090
[ 487.857209] Call Trace:
[ 487.857210] schedule (kernel/sched/core.c:2841)
[ 487.857222] schedule_preempt_disabled (kernel/sched/core.c:2868)
[ 487.857228] mutex_lock_nested (kernel/locking/mutex.c:532 kernel/locking/mutex.c:584)
[ 487.857237] ? unlink_file_vma (mm/mmap.c:245)
[ 487.857241] ? unlink_file_vma (mm/mmap.c:245)
[ 487.857251] unlink_file_vma (mm/mmap.c:245)
[ 487.857261] free_pgtables (mm/memory.c:540)
[ 487.857275] exit_mmap (mm/mmap.c:2803)
[ 487.857284] ? preempt_count_sub (kernel/sched/core.c:2611)
[ 487.857291] mmput (kernel/fork.c:638)
[ 487.857305] do_exit (kernel/exit.c:744)
[ 487.857317] ? get_signal_to_deliver (kernel/signal.c:2333)
[ 487.857327] ? debug_smp_processor_id (lib/smp_processor_id.c:57)
[ 487.857339] ? put_lock_stats.isra.12 (./arch/x86/include/asm/preempt.h:98 kernel/locking/lockdep.c:254)
[ 487.857351] ? _raw_spin_unlock_irq (./arch/x86/include/asm/paravirt.h:819 include/linux/spinlock_api_smp.h:168 kernel/locking/spinlock.c:199)
[ 487.857362] do_group_exit (kernel/exit.c:884)
[ 487.857376] get_signal_to_deliver (kernel/signal.c:2351)
[ 487.857392] do_signal (arch/x86/kernel/signal.c:698)
[ 487.857399] ? put_lock_stats.isra.12 (./arch/x86/include/asm/preempt.h:98 kernel/locking/lockdep.c:254)
[ 487.857411] ? vtime_account_user (kernel/sched/cputime.c:687)
[ 487.857418] ? preempt_count_sub (kernel/sched/core.c:2611)
[ 487.857431] ? context_tracking_user_exit (./arch/x86/include/asm/paravirt.h:809 (discriminator 2) kernel/context_tracking.c:184 (discriminator 2))
[ 487.857441] ? __this_cpu_preempt_check (lib/smp_processor_id.c:63)
[ 487.857451] ? trace_hardirqs_on_caller (kernel/locking/lockdep.c:2557 kernel/locking/lockdep.c:2599)
[ 487.857464] do_notify_resume (arch/x86/kernel/signal.c:751)
[ 487.857479] int_signal (arch/x86/kernel/entry_64.S:600)

Hope that helps. Full log attached for reference.


Thanks,
Sasha

