Re: Softlockup when test shmem swapout-swapin and compaction
From: Zi Yan
Date: Wed Mar 05 2025 - 13:44:50 EST
On 5 Mar 2025, at 11:27, Zi Yan wrote:
> On 26 Feb 2025, at 2:22, Baolin Wang wrote:
>
>> Add Zi.
>>
>> On 2025/2/26 15:03, Liu Shixin wrote:
>>> Hi all,
>>>
>>> I found a softlockup when testing shmem large folio swapout-swapin and compaction:
>>>
>>> watchdog: BUG: soft lockup - CPU#30 stuck for 179s! [folio_swap:4714]
>>> Modules linked in: zram xt_MASQUERADE nf_conntrack_netlink nfnetlink iptable_nat xt_addrtype iptable_filter ip_tantel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm rapl cixt4 mbcache jbd2 sr_mod cdrom ata_generic ata_piix virtio_net net_failover ghash_clmulni_intel libata sha512_ssse3
>>> CPU: 30 UID: 0 PID: 4714 Comm: folio_swap Kdump: loaded Tainted: G L 6.14.0-rc4-next-20250225+ #2
>>> Tainted: [L]=SOFTLOCKUP
>>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
>>> RIP: 0010:xas_load+0x5d/0xc0
>>> Code: 08 48 d3 ea 83 e2 3f 89 d0 48 83 c0 04 48 8b 44 c6 08 48 89 73 18 48 89 c1 83 e1 03 48 83 f9 02 75 08 48 3d
>>> RSP: 0000:ffffadf142f1ba60 EFLAGS: 00000293
>>> RAX: ffffe524cc4f6700 RBX: ffffadf142f1ba90 RCX: 0000000000000000
>>> RDX: 0000000000000011 RSI: ffff9a3e058acb68 RDI: ffffadf142f1ba90
>>> RBP: fffffffffffffffe R08: ffffadf142f1bb50 R09: 0000000000000392
>>> R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000011
>>> R13: ffffadf142f1bb48 R14: ffff9a3e04e9c588 R15: 0000000000000000
>>> FS: 00007fd957666740(0000) GS:ffff9a41ac0e5000(0000) knlGS:0000000000000000
>>> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> CR2: 00007fd922860000 CR3: 000000025c360001 CR4: 0000000000772ef0
>>> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>>> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>>> PKRU: 55555554
>>> Call Trace:
>>> <IRQ>
>>> ? watchdog_timer_fn+0x1c9/0x250
>>> ? __pfx_watchdog_timer_fn+0x10/0x10
>>> ? __hrtimer_run_queues+0x10e/0x250
>>> ? hrtimer_interrupt+0xfb/0x240
>>> ? __sysvec_apic_timer_interrupt+0x4e/0xe0
>>> ? sysvec_apic_timer_interrupt+0x68/0x90
>>> </IRQ>
>>> <TASK>
>>> ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> ? xas_load+0x5d/0xc0
>>> xas_find+0x153/0x1a0
>>> find_get_entries+0x73/0x280
>>> shmem_undo_range+0x1fc/0x640
>>> shmem_evict_inode+0x109/0x270
>>> evict+0x107/0x240
>>> ? fsnotify_destroy_marks+0x25/0x180
>>> ? _atomic_dec_and_lock+0x35/0x50
>>> __dentry_kill+0x71/0x190
>>> dput+0xd1/0x190
>>> __fput+0x128/0x2a0
>>> task_work_run+0x57/0x90
>>> syscall_exit_to_user_mode+0x1cb/0x1e0
>>> do_syscall_64+0x67/0x170
>>> entry_SYSCALL_64_after_hwframe+0x76/0x7e
>>> RIP: 0033:0x7fd95776eb8b
>>>
>>> If CONFIG_DEBUG_VM is enabled, we also hit VM_BUG_ON_FOLIO(!folio_test_locked(folio)) in
>>> shmem_add_to_page_cache(). The problem seems related to memory migration or compaction,
>>> which is necessary for reproduction, although I don't have a clear explanation why.
>>>
>>> To reproduce the problem, first set up a zram device as the swap backend, then run the
>>> reproduction program. The reproduction program consists of three parts:
>>> 1. A process constantly changes the status of shmem large folios via these interfaces:
>>> /sys/kernel/mm/transparent_hugepage/hugepages-<size>/shmem_enabled
>>> 2. A process constantly echoes 1 > /proc/sys/vm/compact_memory
>>> 3. A process constantly allocates/frees/swaps out/swaps in shmem large folios (sketched below).
>>>
>>> I'm not sure whether the first process is necessary, but the second and third are. In addition,
>>> I tried hacking compaction_alloc() to return NULL, and the problem disappeared, so I guess
>>> the problem is in migration.
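>>>
>>> A minimal sketch of what process 3 could look like (hypothetical: it assumes a
>>> memfd-backed shmem mapping and MADV_PAGEOUT to force the swapout; the actual
>>> reproducer may differ):
>>>
>>> #define _GNU_SOURCE
>>> #include <sys/mman.h>
>>> #include <string.h>
>>> #include <unistd.h>
>>> #include <err.h>
>>>
>>> #define SIZE (128UL << 20)	/* large enough to span many large folios */
>>>
>>> int main(void)
>>> {
>>> 	for (;;) {
>>> 		int fd = memfd_create("shmem_repro", 0);	/* shmem backing */
>>> 		if (fd < 0)
>>> 			err(1, "memfd_create");
>>> 		if (ftruncate(fd, SIZE))
>>> 			err(1, "ftruncate");
>>> 		char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
>>> 			       MAP_SHARED, fd, 0);
>>> 		if (p == MAP_FAILED)
>>> 			err(1, "mmap");
>>> 		memset(p, 0xab, SIZE);			/* allocate shmem folios */
>>> 		if (madvise(p, SIZE, MADV_PAGEOUT))	/* swap out to zram */
>>> 			err(1, "madvise");
>>> 		for (volatile char *q = p; q < p + SIZE; q += 4096)
>>> 			(void)*q;			/* fault back in (swapin) */
>>> 		munmap(p, SIZE);
>>> 		close(fd);				/* free the folios */
>>> 	}
>>> }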
>>>
>>> The problem is different from https://lore.kernel.org/all/1738717785.im3r5g2vxc.none@localhost/
>>> since I have confirmed this problem still exists after merging that fix.
>>
>> Could you check if your version includes Zi's fix[1]? Not sure if it's related to the shmem large folio split.
>>
>> [1] https://lore.kernel.org/all/AF487A7A-F685-485D-8D74-756C843D6F0A@xxxxxxxxxx/
>
> Hi Baolin,
>
> With steps 2 and 3 above, I hit the error below on mm-everything-2025-03-05-03-54.
> It hits VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index), folio).
> mm-everything-2025-03-05-03-54 does not have my folio_split() nor xas_try_split()
> patches. Can you take a look at it?
>
> What I did:
>
> 1. run “while true; do echo 1 | sudo tee /proc/sys/vm/compact_memory >/dev/null; done” to trigger compaction;
> 2. enable zswap with “echo Y | sudo tee /sys/module/zswap/parameters/enabled”;
> 3. run the shmem_aligned_swapin.c you gave me with SIZE set to 128MB, using
> memset(shared_memory, 0xab, SIZE) at the end instead of just setting 0s
> (this should not make a difference from your original program).
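>
> FWIW, a trivial C equivalent of the compaction trigger in step 1 (a sketch;
> the shell loop above works just as well, and both need root):
>
> #include <fcntl.h>
> #include <unistd.h>
> #include <err.h>
>
> int main(void)
> {
> 	for (;;) {
> 		/* each write of "1" triggers a full compaction pass */
> 		int fd = open("/proc/sys/vm/compact_memory", O_WRONLY);
> 		if (fd < 0)
> 			err(1, "open");
> 		if (write(fd, "1", 1) != 1)
> 			err(1, "write");
> 		close(fd);
> 	}
> }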
>
>
> [ 37.591314] page: refcount:2 mapcount:0 mapping:000000000c53d1a6 index:0x6687 pfn:0x169420
> [ 37.592552] memcg:ffff8881781c1800
> [ 37.593046] aops:shmem_aops ino:c0f
> [ 37.593586] flags: 0x2ffff000002013d(locked|referenced|uptodate|dirty|lru|active|swapbacked|node=0|zone=2|lastcpupid=0xffff)
> [ 37.595138] raw: 02ffff000002013d ffffea00058a6b08 ffffea0005a50408 ffff88817951cd68
> [ 37.596207] raw: 0000000000006687 0000000000000000 00000002ffffffff ffff8881781c1800
> [ 37.597305] page dumped because: VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index))
> [ 37.598464] ------------[ cut here ]------------
> [ 37.599119] kernel BUG at mm/filemap.c:2134!
> [ 37.599729] Oops: invalid opcode: 0000 [#1] PREEMPT SMP PTI
> [ 37.600494] CPU: 4 UID: 1000 PID: 1922 Comm: shmem_aligned_s Not tainted 6.14.0-rc3-mm-everything-2024-01-29-07-19-no-mglru+ #49
> [ 37.602267] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
> [ 37.603557] RIP: 0010:find_lock_entries+0x352/0x360
> [ 37.604269] Code: 44 24 10 48 8b 4c 24 18 eb b5 48 c7 c6 08 bb 99 82 4c 89 ff e8 cf e6 03 00 0f 0b 48 c7 c6 70 a3 9c 82 4c 89 ff e8 be e6 03 00 <0f> 0b e8 a7 69 ca 00 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90
> [ 37.606838] RSP: 0018:ffffc900029bba68 EFLAGS: 00010246
> [ 37.607566] RAX: 000000000000004a RBX: fffffffffffffffe RCX: 0000000000000000
> [ 37.608550] RDX: 0000000000000000 RSI: ffff8882b5d1d900 RDI: ffff8882b5d1d900
> [ 37.609580] RBP: ffffc900029bba88 R08: 00000000ffffdfff R09: ffffffff82d238c8
> [ 37.610566] R10: ffffffff82c73920 R11: 6d75642065676170 R12: ffffc900029bbb48
> [ 37.611551] R13: ffffc900029bbc48 R14: ffff88817951cd68 R15: ffffea0005a50800
> [ 37.612535] FS: 00007f40107cc740(0000) GS:ffff8882b5d00000(0000) knlGS:0000000000000000
> [ 37.613683] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 37.614483] CR2: 00007f40107bf000 CR3: 0000000101c00004 CR4: 0000000000370ef0
> [ 37.615472] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [ 37.616459] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> [ 37.617481] Call Trace:
> [ 37.617838] <TASK>
> [ 37.618183] ? __die_body.cold+0x19/0x26
> [ 37.618746] ? die+0x2a/0x50
> [ 37.619150] ? do_trap+0xc6/0x110
> [ 37.619610] ? do_error_trap+0x6a/0x90
> [ 37.620126] ? find_lock_entries+0x352/0x360
> [ 37.620711] ? exc_invalid_op+0x4c/0x60
> [ 37.621354] ? find_lock_entries+0x352/0x360
> [ 37.622075] ? asm_exc_invalid_op+0x16/0x20
> [ 37.622647] ? find_lock_entries+0x352/0x360
> [ 37.623268] ? find_lock_entries+0x352/0x360
> [ 37.623946] shmem_undo_range+0xf8/0x6a0
> [ 37.624547] shmem_evict_inode+0x117/0x2b0
> [ 37.625145] evict+0x10b/0x290
> [ 37.625607] ? fsnotify_grab_connector+0x42/0x80
> [ 37.626257] ? fsnotify_destroy_marks+0x26/0x1a0
> [ 37.626903] ? do_raw_spin_unlock+0x49/0xb0
> [ 37.627489] __dentry_kill+0x71/0x1a0
> [ 37.628009] dput+0xf8/0x1c0
> [ 37.628427] __fput+0x143/0x2b0
> [ 37.628877] task_work_run+0x58/0xa0
> [ 37.629428] syscall_exit_to_user_mode+0x168/0x170
> [ 37.630099] do_syscall_64+0x57/0x110
> [ 37.630619] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 37.631324] RIP: 0033:0x7f40108dc1f7
> [ 37.631844] Code: 73 01 c3 48 8b 0d 29 9c 0d 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 0b 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d f9 9b 0d 00 f7 d8 64 89 01 48
> [ 37.634424] RSP: 002b:00007ffcad4bd468 EFLAGS: 00000202 ORIG_RAX: 000000000000000b
> [ 37.635466] RAX: 0000000000000000 RBX: 00007ffcad4bd598 RCX: 00007f40108dc1f7
> [ 37.636450] RDX: 0000000008000000 RSI: 0000000008000000 RDI: 00007f40087c0000
> [ 37.637477] RBP: 00007ffcad4bd480 R08: 0000000000000064 R09: 00000000fffffffd
> [ 37.638460] R10: 0000000000000003 R11: 0000000000000202 R12: 0000000000000000
> [ 37.639442] R13: 00007ffcad4bd5a8 R14: 00007f4010a10000 R15: 000055694d923dd8
> [ 37.640422] </TASK>
> [ 37.640750] Modules linked in:
> [ 37.641249] ---[ end trace 0000000000000000 ]---
OK, it is probably still caused by the __folio_migrate_mapping() bug, since
writing to the sibling entries of a multi-index entry breaks that multi-index entry.
Thanks to Matthew for doing the experiments.
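To illustrate the mechanism (a simplified fragment of the buggy pattern, not
the actual fix): the shmem mapping stores a large folio as one multi-index
entry covering all of its sub-indexes, but the migration path currently
writes the new folio to every sub-index:

	for (i = 0; i < entries; i++) {
		xas_store(&xas, newfolio);
		xas_next(&xas);
	}

The stores past the first land on the sibling slots and split the one
multi-index entry into independent order-0 entries, while the folio itself
is still large. Later updates that treat the range as a single multi-index
entry then only touch part of it, leaving stale slots behind, which is
presumably what VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index))
catches. The per-index loop is only needed for the swap cache, which stores
nr separate entries; for the mapping, a single store of the multi-index
entry (entries == 1) keeps it intact.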
I will send a proper fix based on Hugh’s version[1] and cc stable.
[1] https://lore.kernel.org/linux-mm/23d65532-859a-e88f-9c24-06a6c7ff4006@xxxxxxxxxx/
Best Regards,
Yan, Zi