fs: hfsplus: use after free in hfsplus_bnode_read
From: Sasha Levin
Date: Fri Feb 20 2015 - 05:14:58 EST
Hi all,
While fuzzing with trinity inside a KVM tools guest running the latest -next
kernel, I've stumbled on the following spew:
[ 2014.561050] BUG: KASan: use after free in memcpy+0x21/0x50 at addr ffff880ff138afee
[ 2014.561050] Read of size 4 by task trinity-main/10201
[ 2014.561050] page:ffffea003fc4e280 count:0 mapcount:0 mapping: (null) index:0x2
[ 2014.561050] flags: 0xcafffff80000000()
[ 2014.561050] page dumped because: kasan: bad access detected
[ 2014.561050] CPU: 23 PID: 10201 Comm: trinity-main Not tainted 3.19.0-next-20150219-sasha-00045-g9130270f #1939
[ 2014.561050] ffff880ff138afee 000000002e9b0643 ffff8803023cf0f8 ffffffffa2b40d3a
[ 2014.561050] 1ffffd4007f89c57 ffff8803023cf188 ffff8803023cf178 ffffffff987648f4
[ 2014.561050] ffff8803023c0d52 ffff8803023c0000 0000000000000282 ffff8803023c0ce0
[ 2014.561050] Call Trace:
[ 2014.561050] dump_stack (lib/dump_stack.c:52)
[ 2014.561050] kasan_report_error (mm/kasan/report.c:132 mm/kasan/report.c:193)
[ 2014.561050] kasan_report (mm/kasan/report.c:230)
[ 2014.561050] ? memcpy (mm/kasan/kasan.c:283)
[ 2014.561050] __asan_loadN (mm/kasan/kasan.c:477)
[ 2014.561050] ? __might_sleep (kernel/sched/core.c:7356 (discriminator 14))
[ 2014.561050] memcpy (mm/kasan/kasan.c:283)
[ 2014.561050] hfsplus_bnode_read (fs/hfsplus/bnode.c:34)
[ 2014.561050] hfsplus_brec_lenoff (fs/hfsplus/brec.c:26)
[ 2014.561050] ? __lock_is_held (kernel/locking/lockdep.c:3518)
[ 2014.561050] ? hfs_btree_inc_height (fs/hfsplus/brec.c:20)
[ 2014.561050] ? hfsplus_brec_remove (fs/hfsplus/bfind.c:96)
[ 2014.561050] __hfsplus_brec_find (fs/hfsplus/bfind.c:130)
[ 2014.561050] ? hfs_find_1st_rec_by_cnid (fs/hfsplus/bfind.c:115)
[ 2014.561050] ? hfsplus_bnode_find (./arch/x86/include/asm/bitops.h:311 fs/hfsplus/bnode.c:494)
[ 2014.561050] ? _atomic_dec_and_lock (./arch/x86/include/asm/atomic.h:118 lib/dec_and_lock.c:28)
[ 2014.561050] ? hfsplus_bnode_put (fs/hfsplus/bnode.c:483)
[ 2014.561050] ? _raw_spin_unlock (./arch/x86/include/asm/preempt.h:77 include/linux/spinlock_api_smp.h:154 kernel/locking/spinlock.c:183)
[ 2014.561050] ? hfsplus_bnode_put (fs/hfsplus/bnode.c:657)
[ 2014.561050] hfsplus_brec_find (fs/hfsplus/bfind.c:196)
[ 2014.561050] ? hfsplus_brec_remove (fs/hfsplus/bfind.c:96)
[ 2014.561050] ? __hfsplus_brec_find (fs/hfsplus/bfind.c:166)
[ 2014.561050] ? kasan_kmalloc (mm/kasan/kasan.c:354)
[ 2014.561050] ? __kmalloc (mm/slub.c:3325)
[ 2014.561050] ? each_symbol_section (kernel/module.c:3810)
[ 2014.561050] hfsplus_brec_read (fs/hfsplus/bfind.c:224)
[ 2014.561050] hfsplus_lookup (fs/hfsplus/dir.c:53)
[ 2014.561050] ? hfsplus_link (fs/hfsplus/dir.c:32)
[ 2014.561050] ? _raw_spin_unlock (./arch/x86/include/asm/preempt.h:77 include/linux/spinlock_api_smp.h:154 kernel/locking/spinlock.c:183)
[ 2014.561050] ? __lock_acquire (kernel/locking/lockdep.c:2019 kernel/locking/lockdep.c:3184)
[ 2014.561050] ? __d_alloc (fs/dcache.c:1525)
[ 2014.561050] ? debug_check_no_locks_freed (kernel/locking/lockdep.c:3051)
[ 2014.561050] ? __slab_alloc (mm/slub.c:2413 (discriminator 2))
[ 2014.561050] ? mark_held_locks (kernel/locking/lockdep.c:2525)
[ 2014.561050] ? lockdep_init_map (kernel/locking/lockdep.c:2986)
[ 2014.561050] ? d_alloc (fs/dcache.c:769 fs/dcache.c:1601)
[ 2014.561050] ? _raw_spin_unlock (./arch/x86/include/asm/preempt.h:77 include/linux/spinlock_api_smp.h:154 kernel/locking/spinlock.c:183)
[ 2014.561050] ? d_alloc (fs/dcache.c:1607)
[ 2014.561050] ? vfs_rename (fs/namei.c:1405)
[ 2014.561050] lookup_real (fs/namei.c:1377)
[ 2014.561050] do_last (fs/namei.c:2875 fs/namei.c:2987)
[ 2014.561050] ? complete_walk (fs/namei.c:1775)
[ 2014.561050] ? __slab_alloc (mm/slub.c:2413 (discriminator 2))
[ 2014.561050] ? path_init (fs/namei.c:2921)
[ 2014.561050] ? path_init (fs/namei.c:1953)
[ 2014.561050] ? path_init (fs/namei.c:1933)
[ 2014.561050] ? __mutex_init (kernel/locking/mutex.c:61)
[ 2014.561050] path_openat (fs/namei.c:3236)
[ 2014.561050] ? _raw_spin_unlock (./arch/x86/include/asm/preempt.h:77 include/linux/spinlock_api_smp.h:154 kernel/locking/spinlock.c:183)
[ 2014.561050] ? filename_create (fs/namei.c:3215)
[ 2014.561050] ? getname_flags (fs/namei.c:136)
[ 2014.561050] ? set_track (mm/slub.c:530)
[ 2014.561050] ? __slab_alloc (mm/slub.c:2413 (discriminator 2))
[ 2014.561050] ? trace_hardirqs_on_caller (kernel/locking/lockdep.c:2554 kernel/locking/lockdep.c:2601)
[ 2014.561050] do_filp_open (fs/namei.c:3283)
[ 2014.561050] ? user_path_mountpoint_at (fs/namei.c:3277)
[ 2014.561050] ? __alloc_fd (fs/file.c:501)
[ 2014.561050] do_sys_open (fs/open.c:1013)
[ 2014.561050] ? filp_open (fs/open.c:999)
[ 2014.561050] ? syscall_trace_enter_phase2 (arch/x86/kernel/ptrace.c:1598)
[ 2014.561050] SyS_openat (fs/open.c:1034)
[ 2014.561050] tracesys_phase2 (arch/x86/kernel/entry_64.S:422)
[ 2014.561050] Memory state around the buggy address:
[ 2014.561050] ffff880ff138ae80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 2014.561050] ffff880ff138af00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 2014.561050] >ffff880ff138af80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 2014.561050] ^
[ 2014.561050] ffff880ff138b000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 2014.561050] ffff880ff138b080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
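For reference, the flagged memcpy is the one in the bnode read helper in
fs/hfsplus/bnode.c (hfsplus_bnode_read in the trace above; hfs_bnode_read() in
mainline sources), reached here from hfsplus_brec_lenoff() while fetching a
record's offset/length pair. Paraphrased from the tree around this time, so
exact line numbers may differ, the helper looks roughly like this:

        /* fs/hfsplus/bnode.c -- copy a byte range out of a b-tree node */
        void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
        {
                struct page **pagep;
                int l;

                /* translate the node-relative offset into a page + offset pair */
                off += node->page_offset;
                pagep = node->page + (off >> PAGE_CACHE_SHIFT);
                off &= ~PAGE_CACHE_MASK;

                /* copy out of the first page, then any following pages */
                l = min(len, (int)PAGE_CACHE_SIZE - off);
                memcpy(buf, kmap(*pagep) + off, l);
                kunmap(*pagep);

                while ((len -= l) != 0) {
                        buf += l;
                        l = min(len, (int)PAGE_CACHE_SIZE);
                        memcpy(buf, kmap(*++pagep), l);
                        kunmap(*pagep);
                }
        }

The caller computes the offset from the record number and tree->node_size and
reads a 4-byte big-endian offset pair from the end of the node, which matches
the "Read of size 4" above. Nothing on this path appears to revalidate the
offset against the bnode's page array, so indexing into node->page[] from a
stale or out-of-range bnode would be consistent with the use after free KASan
reports here.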
Thanks,
Sasha