Re: [syzbot] [bpf?] possible deadlock in get_page_from_freelist

From: Hou Tao
Date: Mon May 20 2024 - 07:31:06 EST


Hi,

On 4/15/2024 10:28 AM, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: 7efd0a74039f Merge tag 'ata-6.9-rc4' of git://git.kernel.o..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=1358aeed180000
> kernel config: https://syzkaller.appspot.com/x/.config?x=285be8dd6baeb438
> dashboard link: https://syzkaller.appspot.com/bug?extid=a7f061d2d16154538c58
> compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
>
> Unfortunately, I don't have any reproducer for this issue yet.
>
> Downloadable assets:
> disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7bc7510fe41f/non_bootable_disk-7efd0a74.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/39eb4e17e7f0/vmlinux-7efd0a74.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/b9a08c36e0ca/bzImage-7efd0a74.xz
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+a7f061d2d16154538c58@xxxxxxxxxxxxxxxxxxxxxxxxx
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.9.0-rc3-syzkaller-00355-g7efd0a74039f #0 Not tainted
> ------------------------------------------------------
> syz-executor.2/7645 is trying to acquire lock:
> ffff88807ffd7d58 (&zone->lock){-.-.}-{2:2}, at: rmqueue_buddy mm/page_alloc.c:2730 [inline]
> ffff88807ffd7d58 (&zone->lock){-.-.}-{2:2}, at: rmqueue mm/page_alloc.c:2911 [inline]
> ffff88807ffd7d58 (&zone->lock){-.-.}-{2:2}, at: get_page_from_freelist+0x4b9/0x3780 mm/page_alloc.c:3314
>
> but task is already holding lock:
> ffff88802c8739f8 (&trie->lock){-.-.}-{2:2}, at: trie_update_elem+0xc8/0xdd0 kernel/bpf/lpm_trie.c:324
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (&trie->lock){-.-.}-{2:2}:
> __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> _raw_spin_lock_irqsave+0x3a/0x60 kernel/locking/spinlock.c:162
> trie_delete_elem+0xb0/0x7e0 kernel/bpf/lpm_trie.c:451
> ___bpf_prog_run+0x3e51/0xabd0 kernel/bpf/core.c:1997
> __bpf_prog_run32+0xc1/0x100 kernel/bpf/core.c:2236
> bpf_dispatcher_nop_func include/linux/bpf.h:1234 [inline]
> __bpf_prog_run include/linux/filter.h:657 [inline]
> bpf_prog_run include/linux/filter.h:664 [inline]
> __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
> bpf_trace_run2+0x151/0x420 kernel/trace/bpf_trace.c:2420
> __bpf_trace_contention_end+0xca/0x110 include/trace/events/lock.h:122
> trace_contention_end.constprop.0+0xea/0x170 include/trace/events/lock.h:122
> __pv_queued_spin_lock_slowpath+0x266/0xc80 kernel/locking/qspinlock.c:560
> pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:584 [inline]
> queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
> queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
> do_raw_spin_lock+0x210/0x2c0 kernel/locking/spinlock_debug.c:116
> __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
> _raw_spin_lock_irqsave+0x42/0x60 kernel/locking/spinlock.c:162
> rmqueue_bulk mm/page_alloc.c:2131 [inline]
> __rmqueue_pcplist+0x5a8/0x1b00 mm/page_alloc.c:2826
> rmqueue_pcplist mm/page_alloc.c:2868 [inline]
> rmqueue mm/page_alloc.c:2905 [inline]
> get_page_from_freelist+0xbaa/0x3780 mm/page_alloc.c:3314
> __alloc_pages+0x22b/0x2460 mm/page_alloc.c:4575
> __alloc_pages_node include/linux/gfp.h:238 [inline]
> alloc_pages_node include/linux/gfp.h:261 [inline]
> alloc_slab_page mm/slub.c:2175 [inline]
> allocate_slab mm/slub.c:2338 [inline]
> new_slab+0xcc/0x3a0 mm/slub.c:2391
> ___slab_alloc+0x66d/0x1790 mm/slub.c:3525
> __slab_alloc.constprop.0+0x56/0xb0 mm/slub.c:3610
> __slab_alloc_node mm/slub.c:3663 [inline]
> slab_alloc_node mm/slub.c:3835 [inline]
> __do_kmalloc_node mm/slub.c:3965 [inline]
> __kmalloc_node_track_caller+0x367/0x470 mm/slub.c:3986
> kmalloc_reserve+0xef/0x2c0 net/core/skbuff.c:599
> __alloc_skb+0x164/0x380 net/core/skbuff.c:668
> alloc_skb include/linux/skbuff.h:1313 [inline]
> nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:748 [inline]
> nsim_dev_trap_report drivers/net/netdevsim/dev.c:805 [inline]
> nsim_dev_trap_report_work+0x2a4/0xc80 drivers/net/netdevsim/dev.c:850
> process_one_work+0x9a9/0x1ac0 kernel/workqueue.c:3254
> process_scheduled_works kernel/workqueue.c:3335 [inline]
> worker_thread+0x6c8/0xf70 kernel/workqueue.c:3416
> kthread+0x2c1/0x3a0 kernel/kthread.c:388
> ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
>
> -> #0 (&zone->lock){-.-.}-{2:2}:
> check_prev_add kernel/locking/lockdep.c:3134 [inline]
> check_prevs_add kernel/locking/lockdep.c:3253 [inline]
> validate_chain kernel/locking/lockdep.c:3869 [inline]
> __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
> lock_acquire kernel/locking/lockdep.c:5754 [inline]
> lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
> __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> _raw_spin_lock_irqsave+0x3a/0x60 kernel/locking/spinlock.c:162
> rmqueue_buddy mm/page_alloc.c:2730 [inline]
> rmqueue mm/page_alloc.c:2911 [inline]
> get_page_from_freelist+0x4b9/0x3780 mm/page_alloc.c:3314
> __alloc_pages+0x22b/0x2460 mm/page_alloc.c:4575
> __alloc_pages_node include/linux/gfp.h:238 [inline]
> alloc_pages_node include/linux/gfp.h:261 [inline]
> __kmalloc_large_node+0x7f/0x1a0 mm/slub.c:3911
> __do_kmalloc_node mm/slub.c:3954 [inline]
> __kmalloc_node.cold+0x5/0x5f mm/slub.c:3973
> kmalloc_node include/linux/slab.h:648 [inline]
> bpf_map_kmalloc_node+0x98/0x4a0 kernel/bpf/syscall.c:422
> lpm_trie_node_alloc kernel/bpf/lpm_trie.c:291 [inline]
> trie_update_elem+0x1ef/0xdd0 kernel/bpf/lpm_trie.c:333
> bpf_map_update_value+0x2c1/0x6c0 kernel/bpf/syscall.c:203
> map_update_elem+0x623/0x910 kernel/bpf/syscall.c:1641
> __sys_bpf+0xab9/0x4b40 kernel/bpf/syscall.c:5648
> __do_sys_bpf kernel/bpf/syscall.c:5767 [inline]
> __se_sys_bpf kernel/bpf/syscall.c:5765 [inline]
> __x64_sys_bpf+0x78/0xc0 kernel/bpf/syscall.c:5765
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0xcf/0x260 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> other info that might help us debug this:
>
> Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(&trie->lock);
>                                lock(&zone->lock);
>                                lock(&trie->lock);
>   lock(&zone->lock);
>
> *** DEADLOCK ***
>
> 2 locks held by syz-executor.2/7645:
> #0: ffffffff8d7b0e20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
> #0: ffffffff8d7b0e20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
> #0: ffffffff8d7b0e20 (rcu_read_lock){....}-{1:2}, at: bpf_map_update_value+0x24b/0x6c0 kernel/bpf/syscall.c:202
> #1: ffff88802c8739f8 (&trie->lock){-.-.}-{2:2}, at: trie_update_elem+0xc8/0xdd0 kernel/bpf/lpm_trie.c:324

The normal locking order is trie->lock and then zone->lock. syzbot
constructs the reverse order by attaching a bpf program to
trace_contention_end() for zone->lock and calling trie_delete_elem() from
that program, so the deadlock is indeed possible.
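
Roughly speaking, the reversed order only needs a tracing program shaped
like the sketch below (the map layout and names are mine for illustration,
not taken from the syzbot reproducer):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct lpm_key {
	__u32 prefixlen;
	__u32 data;
};

struct {
	__uint(type, BPF_MAP_TYPE_LPM_TRIE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__uint(max_entries, 16);
	__type(key, struct lpm_key);
	__type(value, __u32);
} trie SEC(".maps");

SEC("tp_btf/contention_end")
int BPF_PROG(on_contention_end, void *lock, int ret)
{
	struct lpm_key key = { .prefixlen = 32, .data = 0 };

	/* trie_delete_elem() takes trie->lock; when this program fires for
	 * a contended zone->lock, the order becomes
	 * zone->lock -> trie->lock, the reverse of the update path.
	 */
	bpf_map_delete_elem(&trie, &key);
	return 0;
}

char _license[] SEC("license") = "GPL";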

There are two feasible ways to fix the problem:
1) switch lpm trie from bpf_map_kmalloc_node()/kfree() to the bpf memory
allocator
2) add deadlock checking just like the hash-table map does, and also make
lockdep happy with the try-lock mechanism (see the rough sketch after this
list).
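
For 2), I mean something modeled on htab_lock_bucket() in
kernel/bpf/hashtab.c, roughly like the sketch below. The helper and field
names are only illustrative, and the lockdep annotation question is left
aside here:

static DEFINE_PER_CPU(int, trie_lock_active);

static int trie_lock(struct lpm_trie *trie, unsigned long *pflags)
{
	unsigned long flags;

	preempt_disable();
	if (unlikely(__this_cpu_inc_return(trie_lock_active) != 1)) {
		/* Re-entered on this CPU, e.g. from a tracing program
		 * which fired while trie->lock (or a lock nested inside
		 * it) was being acquired: bail out instead of
		 * dead-locking.
		 */
		__this_cpu_dec(trie_lock_active);
		preempt_enable();
		return -EBUSY;
	}
	spin_lock_irqsave(&trie->lock, flags);
	*pflags = flags;
	return 0;
}

static void trie_unlock(struct lpm_trie *trie, unsigned long flags)
{
	spin_unlock_irqrestore(&trie->lock, flags);
	__this_cpu_dec(trie_lock_active);
	preempt_enable();
}

trie_update_elem() and trie_delete_elem() would then return -EBUSY when
trie_lock() fails, just like the hash-table map does for a detected
re-entry.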

I prefer 1), but it cannot eliminate the deadlock completely, because
syzbot may construct a bpf program which invokes trie_delete_elem(),
attach it to trace_contention_end() for trie->lock itself and still
trigger a deadlock, so I will fix the syzbot report by 2).
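
For reference, 1) would look roughly like the sketch below (assuming a new
"struct bpf_mem_alloc ma;" member is added to struct lpm_trie and every
node is allocated at leaf size; a real conversion must also handle
intermediate nodes and the free paths). It keeps the page allocator, and
therefore zone->lock, out of the section that holds trie->lock, but as
said above it does not stop a tracing program from re-entering trie->lock
itself:

#include <linux/bpf_mem_alloc.h>

static int trie_init_allocator(struct lpm_trie *trie)
{
	size_t size = sizeof(struct lpm_trie_node) + trie->data_size +
		      trie->map.value_size;

	/* one fixed-size, non-percpu cache per trie */
	return bpf_mem_alloc_init(&trie->ma, size, false);
}

static struct lpm_trie_node *trie_node_alloc_ma(struct lpm_trie *trie)
{
	/* unlike bpf_map_kmalloc_node(GFP_NOWAIT), this never calls into
	 * the page allocator directly; it takes objects from per-cpu
	 * caches which are refilled asynchronously via irq_work
	 */
	return bpf_mem_cache_alloc(&trie->ma);
}

static void trie_node_free_ma(struct lpm_trie *trie,
			      struct lpm_trie_node *node)
{
	bpf_mem_cache_free(&trie->ma, node);
}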