[Bug] KASAN: slab-use-after-free Read in dtInsertEntry
From: Sam Sun
Date: Fri Feb 13 2026 - 10:45:12 EST
Dear developers and maintainers,
While fuzzing with a modified syzkaller, we encountered a KASAN
slab-use-after-free that crashes in the JFS directory-tree insertion
path (dtInsertEntry). The bug was found on kernel v6.19; the full
report is below.
==================================================================
BUG: KASAN: slab-use-after-free in dtInsertEntry.isra.0+0x12dd/0x15f0
fs/jfs/jfs_dtree.c:3701
Read of size 1 at addr ff11000124004580 by task syz.0.1269/28159
CPU: 1 UID: 0 PID: 28159 Comm: syz.0.1269 Tainted: G L
6.19.0-01452-g72c395024dac-dirty #8 PREEMPT(full)
Tainted: [L]=SOFTLOCKUP
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x116/0x1b0 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0xca/0x5f0 mm/kasan/report.c:482
kasan_report+0xca/0x100 mm/kasan/report.c:595
dtInsertEntry.isra.0+0x12dd/0x15f0 fs/jfs/jfs_dtree.c:3701
dtInsert+0x49b/0xad0 fs/jfs/jfs_dtree.c:894
jfs_create+0x609/0xb30 fs/jfs/namei.c:138
lookup_open.isra.0+0xc12/0x1030 fs/namei.c:4483
open_last_lookups fs/namei.c:4583 [inline]
path_openat+0xe97/0x2cf0 fs/namei.c:4827
do_file_open+0x216/0x470 fs/namei.c:4859
do_sys_openat2+0xe6/0x250 fs/open.c:1366
do_sys_open fs/open.c:1372 [inline]
__do_sys_openat fs/open.c:1388 [inline]
__se_sys_openat fs/open.c:1383 [inline]
__x64_sys_openat+0x13f/0x1f0 fs/open.c:1383
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xcb/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f54f93b145d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48
89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d
01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f54fa23ef98 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f54f9655fe0 RCX: 00007f54f93b145d
RDX: 000000000000275a RSI: 00002000000010c0 RDI: ffffffffffffff9c
RBP: 000000000000000b R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000004f5000c
R13: 00007f54f9656078 R14: 00007f54f9655fe0 R15: 00007f54fa21f000
</TASK>
Allocated by task 26041:
kasan_save_stack+0x24/0x50 mm/kasan/common.c:57
kasan_save_track+0x14/0x30 mm/kasan/common.c:78
unpoison_slab_object mm/kasan/common.c:340 [inline]
__kasan_slab_alloc+0x87/0x90 mm/kasan/common.c:366
kasan_slab_alloc include/linux/kasan.h:253 [inline]
slab_post_alloc_hook mm/slub.c:4953 [inline]
slab_alloc_node mm/slub.c:5263 [inline]
kmem_cache_alloc_lru_noprof+0x26b/0x790 mm/slub.c:5282
jfs_alloc_inode+0x27/0x60 fs/jfs/super.c:105
alloc_inode+0x68/0x250 fs/inode.c:346
new_inode+0x22/0x1d0 fs/inode.c:1176
diReadSpecial+0x53/0x730 fs/jfs/jfs_imap.c:426
jfs_mount+0xe5/0x8b0 fs/jfs/jfs_mount.c:87
jfs_fill_super+0x820/0x1030 fs/jfs/super.c:523
get_tree_bdev_flags+0x389/0x620 fs/super.c:1694
vfs_get_tree+0x93/0x340 fs/super.c:1754
fc_mount+0x1a/0x220 fs/namespace.c:1193
do_new_mount_fc fs/namespace.c:3760 [inline]
do_new_mount fs/namespace.c:3836 [inline]
path_mount+0x76e/0x20a0 fs/namespace.c:4146
do_mount fs/namespace.c:4159 [inline]
__do_sys_mount fs/namespace.c:4348 [inline]
__se_sys_mount fs/namespace.c:4325 [inline]
__x64_sys_mount+0x293/0x310 fs/namespace.c:4325
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xcb/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Freed by task 23:
kasan_save_stack+0x24/0x50 mm/kasan/common.c:57
kasan_save_track+0x14/0x30 mm/kasan/common.c:78
kasan_save_free_info+0x3b/0x60 mm/kasan/generic.c:584
poison_slab_object mm/kasan/common.c:253 [inline]
__kasan_slab_free+0x61/0x80 mm/kasan/common.c:285
kasan_slab_free include/linux/kasan.h:235 [inline]
slab_free_hook mm/slub.c:2540 [inline]
slab_free mm/slub.c:6674 [inline]
kmem_cache_free+0x154/0x760 mm/slub.c:6789
i_callback+0x46/0x70 fs/inode.c:325
rcu_do_batch kernel/rcu/tree.c:2617 [inline]
rcu_core+0x59e/0x1130 kernel/rcu/tree.c:2869
handle_softirqs+0x1d4/0x8e0 kernel/softirq.c:622
run_ksoftirqd kernel/softirq.c:1063 [inline]
run_ksoftirqd+0x3a/0x60 kernel/softirq.c:1055
smpboot_thread_fn+0x3d4/0xaa0 kernel/smpboot.c:160
kthread+0x38d/0x4a0 kernel/kthread.c:467
ret_from_fork+0x966/0xaf0 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
Last potentially related work creation:
kasan_save_stack+0x24/0x50 mm/kasan/common.c:57
kasan_record_aux_stack+0xa7/0xc0 mm/kasan/generic.c:556
__call_rcu_common.constprop.0+0xa4/0xa00 kernel/rcu/tree.c:3131
destroy_inode+0x12c/0x1b0 fs/inode.c:401
evict+0x574/0xa90 fs/inode.c:861
iput_final fs/inode.c:1957 [inline]
iput.part.0+0x5bb/0xf50 fs/inode.c:2006
iput+0x35/0x40 fs/inode.c:1972
diFreeSpecial+0x7b/0x110 fs/jfs/jfs_imap.c:552
jfs_umount+0x213/0x440 fs/jfs/jfs_umount.c:81
jfs_put_super+0x85/0x1d0 fs/jfs/super.c:194
generic_shutdown_super+0x15e/0x390 fs/super.c:646
kill_block_super+0x3b/0x90 fs/super.c:1725
deactivate_locked_super+0xbf/0x1a0 fs/super.c:476
deactivate_super fs/super.c:509 [inline]
deactivate_super+0xb1/0xd0 fs/super.c:505
cleanup_mnt+0x2df/0x430 fs/namespace.c:1312
task_work_run+0x16b/0x260 kernel/task_work.c:233
resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
__exit_to_user_mode_loop kernel/entry/common.c:44 [inline]
exit_to_user_mode_loop+0x11e/0x520 kernel/entry/common.c:75
__exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
do_syscall_64+0x4ec/0xf80 arch/x86/entry/syscall_64.c:100
entry_SYSCALL_64_after_hwframe+0x77/0x7f
The buggy address belongs to the object at ff11000124004018
which belongs to the cache jfs_ip of size 2216
The buggy address is located 1384 bytes inside of
freed 2216-byte region [ff11000124004018, ff110001240048c0)
The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000
index:0xff11000124000000 pfn:0x124000
head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
memcg:ff11000107f4cc01
flags: 0x57ff00000000040(head|node=1|zone=2|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 057ff00000000040 ff1100001df57b40 dead000000000122 0000000000000000
raw: ff11000124000000 00000000800d000c 00000000f5000000 ff11000107f4cc01
head: 057ff00000000040 ff1100001df57b40 dead000000000122 0000000000000000
head: ff11000124000000 00000000800d000c 00000000f5000000 ff11000107f4cc01
head: 057ff00000000003 ffd4000004900001 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000008
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Reclaimable, gfp_mask
0xd2050(__GFP_RECLAIMABLE|__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC),
pid 10963, tgid 10962 (syz.2.11), ts 59603361833, free_ts 59331310638
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x1ca/0x240 mm/page_alloc.c:1884
prep_new_page mm/page_alloc.c:1892 [inline]
get_page_from_freelist+0xdb3/0x2a70 mm/page_alloc.c:3945
__alloc_frozen_pages_noprof+0x256/0x20f0 mm/page_alloc.c:5240
alloc_pages_mpol+0x1f1/0x550 mm/mempolicy.c:2486
alloc_slab_page mm/slub.c:3075 [inline]
allocate_slab mm/slub.c:3248 [inline]
new_slab+0x2d0/0x440 mm/slub.c:3302
___slab_alloc+0xdd8/0x1bc0 mm/slub.c:4656
__slab_alloc.constprop.0+0x66/0x110 mm/slub.c:4779
__slab_alloc_node mm/slub.c:4855 [inline]
slab_alloc_node mm/slub.c:5251 [inline]
kmem_cache_alloc_lru_noprof+0x4be/0x790 mm/slub.c:5282
jfs_alloc_inode+0x27/0x60 fs/jfs/super.c:105
alloc_inode+0x68/0x250 fs/inode.c:346
new_inode+0x22/0x1d0 fs/inode.c:1176
jfs_fill_super+0x6ab/0x1030 fs/jfs/super.c:511
get_tree_bdev_flags+0x389/0x620 fs/super.c:1694
vfs_get_tree+0x93/0x340 fs/super.c:1754
fc_mount+0x1a/0x220 fs/namespace.c:1193
do_new_mount_fc fs/namespace.c:3760 [inline]
do_new_mount fs/namespace.c:3836 [inline]
path_mount+0x76e/0x20a0 fs/namespace.c:4146
page last free pid 11012 tgid 11012 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1433 [inline]
__free_frozen_pages+0x83e/0x1130 mm/page_alloc.c:2973
discard_slab mm/slub.c:3346 [inline]
__put_partials+0x132/0x170 mm/slub.c:3886
qlink_free mm/kasan/quarantine.c:163 [inline]
qlist_free_all+0x4e/0xf0 mm/kasan/quarantine.c:179
kasan_quarantine_reduce+0x195/0x1e0 mm/kasan/quarantine.c:286
__kasan_slab_alloc+0x67/0x90 mm/kasan/common.c:350
kasan_slab_alloc include/linux/kasan.h:253 [inline]
slab_post_alloc_hook mm/slub.c:4953 [inline]
slab_alloc_node mm/slub.c:5263 [inline]
kmem_cache_alloc_lru_noprof+0x26b/0x790 mm/slub.c:5282
shmem_alloc_inode+0x27/0x50 mm/shmem.c:5181
alloc_inode+0x68/0x250 fs/inode.c:346
new_inode+0x22/0x1d0 fs/inode.c:1176
__shmem_get_inode mm/shmem.c:3097 [inline]
shmem_get_inode+0x19c/0x1000 mm/shmem.c:3171
shmem_symlink+0xec/0x8f0 mm/shmem.c:4132
vfs_symlink fs/namei.c:5615 [inline]
vfs_symlink+0x180/0x4e0 fs/namei.c:5594
filename_symlinkat+0x363/0x490 fs/namei.c:5640
__do_sys_symlinkat fs/namei.c:5660 [inline]
__se_sys_symlinkat fs/namei.c:5655 [inline]
__x64_sys_symlinkat+0xa4/0x140 fs/namei.c:5655
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xcb/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Memory state around the buggy address:
ff11000124004480: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ff11000124004500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ff11000124004580: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ff11000124004600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ff11000124004680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
Unfortunately, no reproducer is currently available. If you have any
questions, please let me know.
Best regards,
Yue