Re: porting kcov to android

From: Baozeng Ding
Date: Wed Jul 06 2016 - 01:08:50 EST



Attached is the patch.
On 2016/7/6 12:57, Baozeng wrote:
> Hello all,
> I backported KASAN to the 3.10.102 stable kernel (ca1199fccf14540e86f6da955333e31d6fec5f3e), based on Andrey Ryabinin's work (a KASAN backport to the RHEL7-based (3.10-based) OpenVZ kernel). I hit the following kernel panic when starting the kernel with the following command:
>
> qemu-system-x86_64 -hda ./wheezy.img -snapshot -m 2048 -net nic -net user,host=10.0.2.10,hostfwd=tcp::51727-:22 -nographic -enable-kvm -numa node,nodeid=0,cpus=0-1 -numa node,nodeid=1,cpus=2-3 -smp sockets=2,cores=2,threads=1 -usb -usbdevice mouse -usbdevice tablet -soundhw all -kernel ./bzImage -append console=ttyS0 root=/dev/sda debug earlyprintk=serial slub_debug=UZ
>
> Any suggestions?
>
> ==================================================================
> BUG: KASan: out of bounds access in usage_match+0x63/0x70 at addr ffff88002c81ff40
> Read of size 8 by task khubd/923
> =============================================================================
> BUG kmalloc-4096 (Not tainted): kasan: bad access detected
> -----------------------------------------------------------------------------
>
> Disabling lock debugging due to kernel taint
> INFO: Allocated in input_dev_pm_ops+0x520/0x5e0 age=131944943344261 cpu=0 pid=-536871936
> 0x41b58ab3
> [< none >] vsock_dgram_ops+0x337bd3/0x3a5a50 ??:?
> [< none >] sysfs_new_dirent+0x0/0x410 /linux-stable/fs/sysfs/dir.c:1027
> 0xffff88002c8209d8
> 0xffffed000590413c
> 0xdffffc0000000000
> 0xffff88002c8209e0
> 0xffff88002c820920
> [< none >] mutex_unlock+0x15/0x20 /linux-stable/kernel/mutex.c:252
> 0x1ffff1000590412f
> 0xffff88002c820958
> [< none >] sysfs_attr_ns+0x162/0x260 /linux-stable/fs/sysfs/file.c:522
> 0x1ffff1000590412f
> 0xffff88002c820a18
> [< none >] dev_attr_uniq+0x0/0x60 arch/x86/crypto/sha512-avx2-asm.o:?
> 0xffff8800280feae0
> INFO: Freed in sysfs_add_file_mode+0x141/0x2d0 age=6421765850 cpu=746719736 pid=-30720
> 0x1242cf991f0
> 0xffffffff00000002
> 0x41b58ab3
> [< none >] vsock_dgram_ops+0x337b87/0x3a5a50 ??:?
> [< none >] sysfs_add_file_mode+0x0/0x2d0 /linux-stable/fs/sysfs/file.c:693
> 0xffff88002cf998c8
> INFO: Slab 0xffffea0000b20600 objects=7 used=0 fp=0xffff88002c818000 flags=0x1fc000000004080
> INFO: Object 0xffff88002c81f8c0 @offset=30912 fp=0x0000000000000002
>
>
> Redzone ffff88002c8208c0: 1a 41 90 05 00 f1 ff 1f .A......
> Padding ffff88002c8209f8: 40 0a 82 2c 00 88 ff ff @..,....
> CPU: 0 PID: 923 Comm: khubd Tainted: G B 3.10.102+ #2
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
> ffff88002c818000 ffff88002c81fc60 ffffffff850cbe98 ffff88002c81fc90
> ffffffff81584f48 ffff88002d806f40 ffffea0000b20600 ffff88002c81f8c0
> 0000000000000000 ffff88002c81fcb8 ffffffff8158b731 ffffed0005903fe8
> Call Trace:
> Memory state around the buggy address:
> ffff88002c81fe00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> ffff88002c81fe80: fc fc f1 f1 f1 f1 00 f4 f4 f4 f2 f2 f2 f2 00 f4
>>ffff88002c81ff00: f4 f4 f2 f2 f2 f2 fc fc fc fc fc fc fc fc f2 f2
> ^
> ffff88002c81ff80: f2 f2 fc fc fc fc fc fc fc fc f3 f3 f3 f3 fc fc
> ffff88002c820000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> ==================================================================
> kasan: CONFIG_KASAN_INLINE enabled
> kasan: GPF could be caused by NULL-ptr deref or user memory access
> general protection fault: 0000 [#1] SMP KASAN
> Modules linked in:
> CPU: 0 PID: 923 Comm: khubd Tainted: G B 3.10.102+ #2
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
> task: ffff88002cf991f0 ti: ffff88002c820000 task.ti: ffff88002c820000
> RIP: 0010:[<ffffffff8134328b>] [<ffffffff8134328b>] cpuacct_charge+0x1ab/0x490
> RSP: 0000:ffff88002de03be0 EFLAGS: 00010046
> RAX: dffffc001d5585dc RBX: 000000000000c5a0 RCX: 00000000eaac2ee0
> RDX: ffffffff869c2c60 RSI: 1ffffffff0c1a6c0 RDI: ffffffff860d3600
> RBP: ffff88002de03c28 R08: 0000000000000001 R09: 0000000000000001
> R10: 0000000000000020 R11: ffffed000fffb001 R12: ffffffff860d35a0
> R13: dffffc0000000000 R14: 00000000134c2dae R15: 000000002c820050
> FS: 0000000000000000(0000) GS:ffff88002de00000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> CR2: 00000000ffffffff CR3: 000000000600d000 CR4: 00000000000006f0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Stack:
> ffffffff81343182 00000000146efbea ffff88007ffd8008 ffff88007ffd801c
> ffff88002cf99238 ffff88002de124a8 0000000ee4d60d04 00000000134c2dae
> ffff88002cf99278 ffff88002de03c78 ffffffff81317811 ffffffff8119be42
> Call Trace:
> <IRQ>
> [< inline >] ? __rcu_read_lock /linux-stable/include/linux/rcupdate.h:198
> [< inline >] ? rcu_read_lock /linux-stable/include/linux/rcupdate.h:776
> [<ffffffff81343182>] ? cpuacct_charge+0xa2/0x490 /linux-stable/kernel/sched/cpuacct.c:253
> [<ffffffff81317811>] update_curr+0x291/0x610 /linux-stable/kernel/sched/fair.c:711
> [<ffffffff8119be42>] ? kvm_clock_read+0x62/0xc0 /linux-stable/arch/x86/kernel/kvmclock.c:88
> [< inline >] entity_tick /linux-stable/kernel/sched/fair.c:1987
> [<ffffffff8131c070>] task_tick_fair+0x60/0x1430 /linux-stable/kernel/sched/fair.c:5778
> [<ffffffff81309e68>] ? sched_clock_cpu+0x108/0x1b0 /linux-stable/kernel/sched/clock.c:258
> [<ffffffff812ff07a>] scheduler_tick+0x29a/0x510 /linux-stable/kernel/sched/core.c:2748
> [<ffffffff81281971>] update_process_times+0xa1/0xc0 /linux-stable/kernel/timer.c:1362
> [<ffffffff81372528>] tick_sched_handle.isra.14+0xb8/0xf0 /linux-stable/kernel/time/tick-sched.c:146
> [<ffffffff813725d0>] tick_sched_timer+0x70/0xa0 /linux-stable/kernel/time/tick-sched.c:1100
> [<ffffffff812d39f7>] __run_hrtimer+0x127/0xd90 /linux-stable/kernel/hrtimer.c:1276
> [<ffffffff81372560>] ? tick_sched_handle.isra.14+0xf0/0xf0 /linux-stable/kernel/time/tick-sched.c:143
> [<ffffffff812d637d>] hrtimer_interrupt+0x32d/0x780 /linux-stable/kernel/hrtimer.c:1365
> [<ffffffff812d6050>] ? hrtimer_get_next_event+0x150/0x150 /linux-stable/kernel/hrtimer.c:1183
> [<ffffffff81377c52>] ? trace_hardirqs_off+0x12/0x20 /linux-stable/kernel/lockdep.c:2642
> [<ffffffff81424e79>] ? rcu_irq_enter+0xb9/0x120 /linux-stable/kernel/rcutree.c:627
> [< inline >] local_apic_timer_interrupt /linux-stable/arch/x86/kernel/apic/apic.c:911
> [<ffffffff81186547>] smp_apic_timer_interrupt+0xe7/0x180 /linux-stable/arch/x86/kernel/apic/apic.c:938
> [<ffffffff8510a0b2>] apic_timer_interrupt+0x72/0x80 /linux-stable/arch/x86/kernel/entry_64.S:1188
> <EOI>
> [< inline >] ? arch_local_irq_restore /linux-stable/arch/x86/include/asm/paravirt.h:829
> [< inline >] ? buffered_rmqueue /linux-stable/mm/page_alloc.c:1536
> [<ffffffff814d809e>] ? get_page_from_freelist+0x91e/0x19b0 /linux-stable/mm/page_alloc.c:1974
> [<ffffffff81377f19>] ? check_chain_key+0x2b9/0x4d0 /linux-stable/kernel/lockdep.c:2177
> [<ffffffff81377f19>] ? check_chain_key+0x2b9/0x4d0 /linux-stable/kernel/lockdep.c:2177
> [<ffffffff814d7780>] ? free_reserved_area+0x1a0/0x1a0 /linux-stable/arch/x86/include/asm/page_64.h:17
> [< inline >] ? arch_local_irq_restore /linux-stable/arch/x86/include/asm/paravirt.h:829
> [<ffffffff813813f3>] ? lock_is_held+0x153/0x1c0 /linux-stable/kernel/lockdep.c:3640
> [<ffffffff814d994e>] __alloc_pages_nodemask+0x28e/0x14e0 /linux-stable/mm/page_alloc.c:2663
> [<ffffffff813818e0>] ? debug_show_all_locks+0x480/0x480 /linux-stable/kernel/lockdep.c:4162
> [<ffffffff81377f19>] ? check_chain_key+0x2b9/0x4d0 /linux-stable/kernel/lockdep.c:2177
> [<ffffffff813a1303>] ? __module_text_address+0x13/0x150 /linux-stable/kernel/module.c:3845
> [<ffffffff8158df07>] ? __asan_report_store8_noabort+0x17/0x20 /linux-stable/mm/kasan/report.c:272
> [<ffffffff814d96c0>] ? __alloc_pages_direct_compact+0x590/0x590 /linux-stable/include/linux/compaction.h:59
> [<ffffffff813823a8>] ? __lock_acquire+0xac8/0x49c0 /linux-stable/kernel/lockdep.c:3081
> [< inline >] ? debug_spin_unlock /linux-stable/lib/spinlock_debug.c:102
> [<ffffffff82668db0>] ? do_raw_spin_unlock+0x100/0x260 /linux-stable/lib/spinlock_debug.c:158
> [<ffffffff813818e0>] ? debug_show_all_locks+0x480/0x480 /linux-stable/kernel/lockdep.c:4162
> [<ffffffff81377f19>] ? check_chain_key+0x2b9/0x4d0 /linux-stable/kernel/lockdep.c:2177
> [<ffffffff813823a8>] ? __lock_acquire+0xac8/0x49c0 /linux-stable/kernel/lockdep.c:3081
> [<ffffffff815789c1>] alloc_pages_current+0x181/0x390 /linux-stable/mm/mempolicy.c:2051
> [< inline >] ? allocate_slab /linux-stable/mm/slub.c:1312
> [<ffffffff81586895>] ? new_slab+0x2e5/0x370 /linux-stable/mm/slub.c:1386
> [< inline >] alloc_pages /linux-stable/include/linux/gfp.h:334
> [< inline >] alloc_slab_page /linux-stable/mm/slub.c:1298
> [< inline >] allocate_slab /linux-stable/mm/slub.c:1322
> [<ffffffff815868bc>] new_slab+0x30c/0x370 /linux-stable/mm/slub.c:1386
> [< inline >] new_slab_objects /linux-stable/mm/slub.c:2162
> [<ffffffff81589364>] __slab_alloc+0x4b4/0x5d0 /linux-stable/mm/slub.c:2323
> [< inline >] ? kmem_cache_zalloc /linux-stable/include/linux/slab.h:509
> [<ffffffff8171b778>] ? sysfs_new_dirent+0xf8/0x410 /linux-stable/fs/sysfs/dir.c:381
> [< inline >] ? kmem_cache_zalloc /linux-stable/include/linux/slab.h:509
> [<ffffffff8171b778>] ? sysfs_new_dirent+0xf8/0x410 /linux-stable/fs/sysfs/dir.c:381
> [< inline >] ? arch_local_irq_restore /linux-stable/arch/x86/include/asm/paravirt.h:829
> [<ffffffff813813f3>] ? lock_is_held+0x153/0x1c0 /linux-stable/kernel/lockdep.c:3640
> [< inline >] ? kmem_cache_zalloc /linux-stable/include/linux/slab.h:509
> [<ffffffff8171b778>] ? sysfs_new_dirent+0xf8/0x410 /linux-stable/fs/sysfs/dir.c:381
> [< inline >] slab_alloc_node /linux-stable/mm/slub.c:2397
> [< inline >] slab_alloc /linux-stable/mm/slub.c:2437
> [<ffffffff81589663>] kmem_cache_alloc+0x1e3/0x220 /linux-stable/mm/slub.c:2442
> [< inline >] ? __mutex_unlock_common_slowpath /linux-stable/kernel/mutex.c:479
> [<ffffffff850e6a87>] ? __mutex_unlock_slowpath+0x257/0x410 /linux-stable/kernel/mutex.c:488
> [< inline >] kmem_cache_zalloc /linux-stable/include/linux/slab.h:509
> [<ffffffff8171b778>] sysfs_new_dirent+0xf8/0x410 /linux-stable/fs/sysfs/dir.c:381
> [<ffffffff8171b680>] ? sysfs_readdir+0x7d0/0x7d0 /linux-stable/fs/sysfs/dir.c:1027
> [<ffffffff850e6c55>] ? mutex_unlock+0x15/0x20 /linux-stable/kernel/mutex.c:252
> [<ffffffff81717412>] ? sysfs_attr_ns+0x162/0x260 /linux-stable/fs/sysfs/file.c:522
> [<ffffffff81719161>] sysfs_add_file_mode+0x141/0x2d0 /linux-stable/fs/sysfs/file.c:539
> [<ffffffff81719020>] ? sysfs_remove_file_from_group+0x170/0x170 /linux-stable/fs/sysfs/file.c:693
> [< inline >] ? __mutex_unlock_common_slowpath /linux-stable/kernel/mutex.c:479
> [<ffffffff850e6a87>] ? __mutex_unlock_slowpath+0x257/0x410 /linux-stable/kernel/mutex.c:488
> [<ffffffff8138025a>] ? trace_hardirqs_on_caller+0x30a/0x690 /linux-stable/kernel/lockdep.c:2598
> [<ffffffff813805f2>] ? trace_hardirqs_on+0x12/0x20 /linux-stable/kernel/lockdep.c:2604
> [<ffffffff850e6a97>] ? __mutex_unlock_slowpath+0x267/0x410 /linux-stable/kernel/mutex.c:489
> [< inline >] create_files /linux-stable/fs/sysfs/group.c:48
> [<ffffffff81721b7f>] internal_create_group+0x31f/0x7b0 /linux-stable/fs/sysfs/group.c:82
> [<ffffffff81721860>] ? unmap_bin_file+0x1b0/0x1b0 ??:?
> [<ffffffff8171e330>] ? sysfs_rename_link+0x2d0/0x2d0 /linux-stable/fs/sysfs/symlink.c:214
> [<ffffffff8172202f>] sysfs_create_group+0x1f/0x30 /linux-stable/fs/sysfs/group.c:104
> [<ffffffff82c2d9ab>] device_add_groups+0xab/0x150 /linux-stable/drivers/base/core.c:472
> [< inline >] device_add_attrs /linux-stable/drivers/base/core.c:510
> [<ffffffff82c3218b>] device_add+0xd1b/0x1710 /linux-stable/drivers/base/core.c:1080
> [<ffffffff82c31470>] ? device_private_init+0x190/0x190 /linux-stable/drivers/base/core.c:975
> [< inline >] ? do_init_timer /linux-stable/kernel/timer.c:634
> [<ffffffff8127cad7>] ? init_timer_key+0x157/0x4b0 /linux-stable/kernel/timer.c:652
> [<ffffffff83717713>] input_register_device+0x503/0xc90 /linux-stable/drivers/input/input.c:2085
> [<ffffffff83ef6dfa>] hidinput_connect+0xe4a/0xb550 /linux-stable/drivers/hid/hid-input.c:1385
> [<ffffffff83ef5fb0>] ? hid_map_usage_clear.constprop.5+0x160/0x160 /linux-stable/include/linux/hid.h:817
> [<ffffffff83f24520>] ? hid_irq_out+0x2e0/0x2e0 /linux-stable/drivers/hid/usbhid/hid-core.c:458
> [<ffffffff812ca250>] ? wake_up_bit+0xf0/0xf0 /linux-stable/include/linux/list.h:188
> [<ffffffff813805f2>] ? trace_hardirqs_on+0x12/0x20 /linux-stable/kernel/lockdep.c:2604
> [< inline >] ? __raw_spin_unlock_irqrestore /linux-stable/include/linux/spinlock_api_smp.h:162
> [<ffffffff850ef48b>] ? _raw_spin_unlock_irqrestore+0x4b/0xb0 /linux-stable/kernel/spinlock.c:177
> [< inline >] ? spin_unlock_irqrestore /linux-stable/include/linux/spinlock.h:348
> [<ffffffff83f2991e>] ? usbhid_submit_report+0x6e/0x80 /linux-stable/drivers/hid/usbhid/hid-core.c:648
> [<ffffffff83eeb2b3>] hid_connect+0x923/0xc70 /linux-stable/drivers/hid/hid-core.c:1479
> [<ffffffff8158d3b1>] ? memset+0x31/0x40 /linux-stable/mm/kasan/kasan.c:278
> [<ffffffff83eea990>] ? extract+0xc0/0xc0 /linux-stable/drivers/hid/hid-core.c:998
> [< inline >] hid_hw_start /linux-stable/include/linux/hid.h:886
> [<ffffffff83eef381>] hid_device_probe+0x301/0x500 /linux-stable/drivers/hid/hid-core.c:1955
> [<ffffffff83eef080>] ? hid_add_device+0x9e0/0x9e0 /linux-stable/drivers/hid/hid-core.c:685
> [< inline >] really_probe /linux-stable/drivers/base/dd.c:302
> [<ffffffff82c3a8aa>] driver_probe_device+0x15a/0xad0 /linux-stable/drivers/base/dd.c:399
> [<ffffffff82c3b220>] ? driver_probe_device+0xad0/0xad0 /linux-stable/drivers/base/dd.c:313
> [<ffffffff82c3b2b0>] __device_attach+0x90/0xc0 /linux-stable/drivers/base/dd.c:412
> [<ffffffff82c34b7a>] bus_for_each_drv+0x13a/0x1d0 /linux-stable/drivers/base/bus.c:451
> [<ffffffff82c34a40>] ? bus_rescan_devices+0x30/0x30 /linux-stable/drivers/base/bus.c:797
> [<ffffffff82c3a68b>] device_attach+0x12b/0x180 /linux-stable/drivers/base/dd.c:447
> [<ffffffff82c38166>] bus_probe_device+0x1e6/0x2d0 /linux-stable/drivers/base/bus.c:541
> [<ffffffff82c323aa>] device_add+0xf3a/0x1710 /linux-stable/drivers/base/core.c:1099
> [<ffffffff850e6c55>] ? mutex_unlock+0x15/0x20 /linux-stable/kernel/mutex.c:252
> [<ffffffff82c31470>] ? device_private_init+0x190/0x190 /linux-stable/drivers/base/core.c:975
> [<ffffffff820a0d01>] ? debugfs_create_file+0x51/0x70 /linux-stable/fs/debugfs/inode.c:403
> [<ffffffff83eee98b>] hid_add_device+0x2eb/0x9e0 /linux-stable/drivers/hid/hid-core.c:2406
> [<ffffffff83eee6a0>] ? hid_ignore+0x80/0x80 /linux-stable/drivers/hid/hid-core.c:2295
> [<ffffffff83f2bc6a>] usbhid_probe+0xb1a/0x1100 /linux-stable/drivers/hid/usbhid/hid-core.c:1364
> [<ffffffff8355e649>] usb_probe_interface+0x319/0x6e0 /linux-stable/drivers/usb/core/driver.c:335
> [<ffffffff8355e330>] ? usb_match_dynamic_id+0x100/0x100 /linux-stable/drivers/usb/core/driver.c:202
> [< inline >] really_probe /linux-stable/drivers/base/dd.c:302
> [<ffffffff82c3a8aa>] driver_probe_device+0x15a/0xad0 /linux-stable/drivers/base/dd.c:399
> [<ffffffff82c3b220>] ? driver_probe_device+0xad0/0xad0 /linux-stable/drivers/base/dd.c:313
> [<ffffffff82c3b2b0>] __device_attach+0x90/0xc0 /linux-stable/drivers/base/dd.c:412
> [<ffffffff82c34b7a>] bus_for_each_drv+0x13a/0x1d0 /linux-stable/drivers/base/bus.c:451
> [<ffffffff82c34a40>] ? bus_rescan_devices+0x30/0x30 /linux-stable/drivers/base/bus.c:797
> [<ffffffff82c3a68b>] device_attach+0x12b/0x180 /linux-stable/drivers/base/dd.c:447
> [<ffffffff82c38166>] bus_probe_device+0x1e6/0x2d0 /linux-stable/drivers/base/bus.c:541
> [<ffffffff82c323aa>] device_add+0xf3a/0x1710 /linux-stable/drivers/base/core.c:1099
> [< inline >] ? __mutex_unlock_common_slowpath /linux-stable/kernel/mutex.c:479
> [<ffffffff850e6a87>] ? __mutex_unlock_slowpath+0x257/0x410 /linux-stable/kernel/mutex.c:488
> [<ffffffff82c31470>] ? device_private_init+0x190/0x190 /linux-stable/drivers/base/core.c:975
> [<ffffffff850e6c55>] ? mutex_unlock+0x15/0x20 /linux-stable/kernel/mutex.c:252
> [< inline >] ? usb_device_supports_ltm /linux-stable/include/linux/usb.h:699
> [<ffffffff83531e87>] ? usb_enable_ltm+0x97/0x350 /linux-stable/drivers/usb/core/hub.c:2855
> [<ffffffff8355a6d9>] usb_set_configuration+0xce9/0x17c0 /linux-stable/drivers/usb/core/message.c:1898
> [<ffffffff83576afc>] generic_probe+0x6c/0xe0 /linux-stable/drivers/usb/core/generic.c:171
> [<ffffffff8355c20f>] usb_probe_device+0x6f/0xc0 /linux-stable/drivers/usb/core/driver.c:231
> [<ffffffff8355c1a0>] ? usb_register_device_driver+0x2a0/0x2a0 /linux-stable/drivers/usb/core/driver.c:841
> [< inline >] really_probe /linux-stable/drivers/base/dd.c:302
> [<ffffffff82c3a8aa>] driver_probe_device+0x15a/0xad0 /linux-stable/drivers/base/dd.c:399
> [<ffffffff82c3b220>] ? driver_probe_device+0xad0/0xad0 /linux-stable/drivers/base/dd.c:313
> [<ffffffff82c3b2b0>] __device_attach+0x90/0xc0 /linux-stable/drivers/base/dd.c:412
> [<ffffffff82c34b7a>] bus_for_each_drv+0x13a/0x1d0 /linux-stable/drivers/base/bus.c:451
> [<ffffffff82c34a40>] ? bus_rescan_devices+0x30/0x30 /linux-stable/drivers/base/bus.c:797
> [<ffffffff82c3a68b>] device_attach+0x12b/0x180 /linux-stable/drivers/base/dd.c:447
> [<ffffffff82c38166>] bus_probe_device+0x1e6/0x2d0 /linux-stable/drivers/base/bus.c:541
> [<ffffffff82c323aa>] device_add+0xf3a/0x1710 /linux-stable/drivers/base/core.c:1099
> [<ffffffff82c2fd70>] ? dev_notice+0xf0/0xf0 /linux-stable/drivers/base/core.c:2039
> [<ffffffff829ea425>] ? add_device_randomness+0xe5/0x130 /linux-stable/drivers/char/random.c:651
> [<ffffffff82c31470>] ? device_private_init+0x190/0x190 /linux-stable/drivers/base/core.c:975
> [< inline >] ? slab_free /linux-stable/mm/slub.c:2661
> [<ffffffff81588681>] ? kfree+0x271/0x290 /linux-stable/mm/slub.c:3411
> [<ffffffff82c393ea>] ? dev_get_drvdata+0x6a/0x90 /linux-stable/drivers/base/dd.c:598
> [<ffffffff8353c5bd>] usb_new_device+0x76d/0xd20 /linux-stable/drivers/usb/core/hub.c:2399
> [< inline >] hub_port_connect_change /linux-stable/drivers/usb/core/hub.c:4604
> [< inline >] hub_events /linux-stable/drivers/usb/core/hub.c:4893
> [<ffffffff835402bb>] hub_thread+0x138b/0x3ea0 /linux-stable/drivers/usb/core/hub.c:4953
> [<ffffffff8353ef30>] ? hub_port_debounce+0x310/0x310 /linux-stable/drivers/usb/core/hub.c:3965
> [< inline >] ? arch_local_irq_restore /linux-stable/arch/x86/include/asm/paravirt.h:829
> [<ffffffff813885d0>] ? lock_acquire+0x1b0/0x520 /linux-stable/kernel/lockdep.c:3604
> [<ffffffff8132a34b>] ? idle_balance+0x45b/0x6e0 /linux-stable/kernel/sched/fair.c:5306
> [< inline >] ? debug_spin_lock_after /linux-stable/lib/spinlock_debug.c:91
> [<ffffffff826689ab>] ? do_raw_spin_lock+0x20b/0x400 /linux-stable/lib/spinlock_debug.c:138
> [<ffffffff812f3960>] ? perf_trace_sched_process_exec+0x460/0x460 /linux-stable/arch/x86/include/asm/stacktrace.h:112
> [<ffffffff81377f19>] ? check_chain_key+0x2b9/0x4d0 /linux-stable/kernel/lockdep.c:2177
> [<ffffffff8137fecd>] ? mark_held_locks+0x2ad/0x330 /linux-stable/kernel/lockdep.c:2525
> [< inline >] ? __raw_spin_unlock_irq /linux-stable/include/linux/spinlock_api_smp.h:169
> [<ffffffff850ef3ec>] ? _raw_spin_unlock_irq+0x2c/0x80 /linux-stable/kernel/spinlock.c:185
> [<ffffffff8138025a>] ? trace_hardirqs_on_caller+0x30a/0x690 /linux-stable/kernel/lockdep.c:2598
> [<ffffffff813805f2>] ? trace_hardirqs_on+0x12/0x20 /linux-stable/kernel/lockdep.c:2604
> [< inline >] ? __raw_spin_unlock_irq /linux-stable/include/linux/spinlock_api_smp.h:169
> [<ffffffff850ef3ec>] ? _raw_spin_unlock_irq+0x2c/0x80 /linux-stable/kernel/spinlock.c:185
> [< inline >] ? finish_lock_switch /linux-stable/kernel/sched/sched.h:848
> [<ffffffff812ed159>] ? finish_task_switch+0xf9/0x260 /linux-stable/kernel/sched/core.c:1900
> [< inline >] ? finish_lock_switch /linux-stable/kernel/sched/sched.h:839
> [<ffffffff812ed12d>] ? finish_task_switch+0xcd/0x260 /linux-stable/kernel/sched/core.c:1900
> [<ffffffff812ca250>] ? wake_up_bit+0xf0/0xf0 /linux-stable/include/linux/list.h:188
> [<ffffffff812c72ed>] ? __kthread_parkme+0xed/0x170 /linux-stable/kernel/kthread.c:162
> [<ffffffff8353ef30>] ? hub_port_debounce+0x310/0x310 /linux-stable/drivers/usb/core/hub.c:3965
> [<ffffffff812c8283>] kthread+0x1d3/0x240 /linux-stable/drivers/block/aoe/aoecmd.c:1303
> [<ffffffff812c80b0>] ? kthread_worker_fn+0x530/0x530 /linux-stable/include/linux/list.h:27
> [<ffffffff812fda31>] ? schedule_tail+0x31/0x210 /linux-stable/kernel/sched/core.c:1963
> [<ffffffff812c80b0>] ? kthread_worker_fn+0x530/0x530 /linux-stable/include/linux/list.h:27
> [<ffffffff85109218>] ret_from_fork+0x58/0x90 /linux-stable/arch/x86/kernel/entry_64.S:573
> [<ffffffff812c80b0>] ? kthread_worker_fn+0x530/0x530 /linux-stable/include/linux/list.h:27
> Code: 0f 85 17 02 00 00 4c 8b 63 68 4d 85 e4 74 77 49 8d 7c 24 60 48 89 fe 48 c1 ee 03 42 80 3c 2e 00 0f 85 2d 02 00 00 49 8b 5c 24 60 <80> 38 00 0f 85 b7 02 00 00 4a 03 1c fa 48 89 de 48 c1 ee 03 42
> RIP [<ffffffff8134328b>] cpuacct_charge+0x1ab/0x490 /linux-stable/kernel/sched/cpuacct.c:258
> RSP <ffff88002de03be0>
> ---[ end trace 4d690b5b318b4d40 ]---
> Kernel panic - not syncing: Fatal exception in interrupt
>
>
>
> 2016-06-20 22:06 GMT+08:00 Kuthonuzo Luruo <poll.stdin@xxxxxxxxx>:
>
> Heh, I backported KASAN to the 2.6.32 kernel. The biggest difficulty was shadow memory initialization, due to differences in early boot code relative to the 4.x kernel.
>
> Kuthonuzo
>
>
> On Mon, Jun 20, 2016 at 7:10 PM, 'Alexander Potapenko' via syzkaller <syzkaller@xxxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> On Mon, Jun 20, 2016 at 3:36 PM, Baozeng <sploving1@xxxxxxxxx> wrote:
> > Hello all,
> > As we know, syzkaller can use KASAN to find more memory bugs. Has
> > anyone ported KASAN to an older kernel version, for instance 3.10? (Most
> > current Android kernels are 3.10 or even older.) Thanks.
>
> I've ported KASAN to 3.14 and 3.18, but I wouldn't call that a
> pleasant experience. Feel free to ask your questions though.
> > Best Regards,
> > Baozeng
> >
> > 2016-06-15 17:02 GMT+08:00 Alexander Potapenko <glider@xxxxxxxxxx>:
> >>
> >> Baozeng,
> >>
> >> In order to use ConsoleDev you'll need serial port support in the
> >> kernel, and an external serial port attached to the Android device.
> >> If you don't have a serial port, you'll probably need to change adb.go
> >> to read the dmesg output from adb shell.
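> >> (For example, on a rooted device something like "adb shell cat /dev/kmsg"
> >> should stream the kernel log without a serial port; take that as a rough
> >> sketch rather than a tested recipe.)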
> >>
> >> HTH,
> >> Alex
> >>
> >> On Wed, Jun 15, 2016 at 2:46 AM, Baozeng <sploving1@xxxxxxxxx> wrote:
> >> > Thank you Alexander. We will give it a try.
> >> > Dmitry, I have another stupid question. I took a look at adb.go and
> >> > found a ConsoleDev config. Could you give me an example of how to use
> >> > it, i.e. how to use a "cat" command to get the log from the console
> >> > device? Do we need to install any other tool to debug the Android
> >> > device, like this:
> >> > https://developer.chrome.com/devtools/docs/remote-debugging? Thank you
> >> > in advance.
> >> >
> >> > 2016-06-14 21:32 GMT+08:00 Alexander Potapenko <glider@xxxxxxxxxx>:
> >> >>
> >> >> Hi Baozeng,
> >> >>
> >> You may want to take a look at the discussion at
> >> http://lists.infradead.org/pipermail/linux-arm-kernel/2016-March/419034.html,
> >> >> namely at the list of files for which kcov instrumentation should be
> >> >> disabled.
> >> >> If your kernel doesn't boot, try carpet-disabling arch/arm64/boot/*
> >> >> and arch/arm64/kernel/*, and then you can bisect further.
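> >> (Assuming the kcov patches follow the same convention as upstream kcov,
> >> carpet-disabling a directory means adding a line like
> >>
> >> KCOV_INSTRUMENT := n
> >>
> >> to that directory's Makefile, analogous to the KASAN_SANITIZE := n lines
> >> in the KASAN patch elsewhere in this thread.)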
> >> >>
> >> >> Alex
> >> >>
> >> On Tue, Jun 14, 2016 at 11:31 AM, Dmitry Vyukov <dvyukov@xxxxxxxxx>
> >> >> wrote:
> >> >> > On Tue, Jun 14, 2016 at 11:21 AM, Baozeng <sploving1@xxxxxxxxx>
> >> >> > wrote:
> >> >> >> Hi Dmitry,
> >> >> >> We've ported kcov to the arm64 Android kernel (Nexus 6P device), but
> >> >> >> it cannot boot. The kernel is 1.3 MB larger than the original one
> >> >> >> without kcov. Does this affect the booting of the Android device?
> >> >> >
> >> >> > +syzkaller mailing list
> >> >> >
> >> >> > Hi Baozeng,
> >> >> >
> >> >> > We've ported kcov to arm64 and use it with some Android devices.
> >> >> > +Alexander knows more. Did we mail the patches upstream?
> >> >> >
> >> >> > The boot issue is most likely due to bad interaction of kcov
> >> >> > instrumentation with some early bootstrap files. Most likely you need
> >> >> > to disable instrumentation of some boot files.
> >> >> >
> >> >> > --
> >> >> > You received this message because you are subscribed to the Google
> >> >> > Groups "syzkaller" group.
> >> >> > To unsubscribe from this group and stop receiving emails from it,
> >> >> > send
> >> >> > an email to syzkaller+unsubscribe@xxxxxxxxxxxxxxxx.
> >> >> > For more options, visit https://groups.google.com/d/optout.
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Alexander Potapenko
> >> >> Software Engineer
> >> >>
> >> >> Google Germany GmbH
> >> >> Erika-Mann-Straße, 33
> >> >> 80636 München
> >> >>
> >> >> Geschäftsführer: Matthew Scott Sucherman, Paul Terence Manicle
> >> >> Registergericht und -nummer: Hamburg, HRB 86891
> >> >> Sitz der Gesellschaft: Hamburg
> >> >
> >> >
> >> >
> >> >
> >> > --
> >> > Best Regards,
> >> > Baozeng Ding
> >> >
> >>
> >>
> >>
> >> --
> >> Alexander Potapenko
> >> Software Engineer
> >>
> >> Google Germany GmbH
> >> Erika-Mann-Straße, 33
> >> 80636 München
> >>
> >> Geschäftsführer: Matthew Scott Sucherman, Paul Terence Manicle
> >> Registergericht und -nummer: Hamburg, HRB 86891
> >> Sitz der Gesellschaft: Hamburg
> >
> >
> >
> >
> > --
> > Best Regards,
> > Baozeng Ding
> >
>
>
>
> --
> Alexander Potapenko
> Software Engineer
>
> Google Germany GmbH
> Erika-Mann-Straße, 33
> 80636 München
>
> Geschäftsführer: Matthew Scott Sucherman, Paul Terence Manicle
> Registergericht und -nummer: Hamburg, HRB 86891
> Sitz der Gesellschaft: Hamburg
>
> --
> You received this message because you are subscribed to the Google Groups "syzkaller" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to syzkaller+unsubscribe@xxxxxxxxxxxxxxxx.
> For more options, visit https://groups.google.com/d/optout.
>
>
>
>
>
> --
> Best Regards,
> Baozeng Ding
>
commit 5ca47dfed22e5bae58d6c61f0a48487970d1e7ed
Author: Baozeng Ding <sploving1@xxxxxxxxx>
Date: Sun Jul 3 12:01:19 2016 +0800

3.10.102+: Support KASAN for x86_64

Based on Andrey Ryabinin's work (KASAN backport for vzkernel):
https://lists.openvz.org/pipermail/devel/2015-August/066379.html
This patch adds KASAN support to 3.10.102+ (x86_64).

diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..ee36ef1
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,169 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASan uses compile-time instrumentation for checking every memory access,
+therefore you will need GCC version 4.9.2 or newer.
+
+Currently KASan is supported only for x86_64 architecture and requires that the
+kernel be built with the SLUB allocator.
+
+1. Usage
+=========
+
+To enable KASAN, configure the kernel with:
+
+ CONFIG_KASAN = y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are compiler instrumentation types. The former produces a smaller
+binary while the latter is 1.1 - 2 times faster. Inline instrumentation
+requires GCC 5.0 or later.
+
+Currently KASAN works only with the SLUB memory allocator.
+For better bug detection and a nicer report, enable CONFIG_STACKTRACE.
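+
+For example, a .config fragment along these lines (option names as defined by
+this patch):
+
+	CONFIG_SLUB=y
+	CONFIG_KASAN=y
+	CONFIG_KASAN_OUTLINE=y
+	CONFIG_STACKTRACE=y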
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+ For a single file (e.g. main.o):
+ KASAN_SANITIZE_main.o := n
+
+ For all files in one directory:
+ KASAN_SANITIZE := n
+
+1.1 Error reports
+==========
+
+A typical out of bounds access report looks like this:
+
+==================================================================
+BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+Write of size 1 by task modprobe/1689
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
+Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5 kkkkkkkkkkkkkkk.
+Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc ........
+Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ
+CPU: 0 PID: 1689 Comm: modprobe Tainted: G B 3.18.0-rc1-mm1+ #98
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+ ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+==================================================================
+
+The first sections describe the slub object where the bad access happened.
+See 'SLUB Debug output' section in Documentation/vm/slub.txt for details.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each 8 bytes of memory are encoded in one shadow byte as accessible,
+partially accessible, freed or they can be part of a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; number N (1 <= N <= 7) means
+that the first N bytes are accessible, and other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
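+
+As a quick legend for the memory state dump above (the authoritative list is
+in mm/kasan/kasan.h): fc marks a kmalloc redzone, fb freed kmalloc memory, and
+f1/f2/f3/f4 the left/middle/right/partial stack redzones.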
+
+In the report above the arrows point to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
+
+2. Implementation details
+========================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address:
+
+static inline void *kasan_mem_to_shadow(const void *addr)
+{
+	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_OFFSET;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
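+
+For example, with the standard x86_64 offset KASAN_SHADOW_OFFSET =
+0xdffffc0000000000, the buggy address from the report above translates as:
+
+	(0xffff8800693bc5d3 >> 3) + 0xdffffc0000000000 == 0xffffed000d2778ba
+
+and the shadow byte stored there is the 03 flagged in the memory state dump.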
+
+Compile-time instrumentation is used for checking memory accesses. The
+compiler inserts function calls (__asan_load*(addr), __asan_store*(addr))
+before each memory access of size 1, 2, 4, 8 or 16. These functions check
+whether the memory access is valid by checking the corresponding shadow
+memory.
+
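+For illustration only (a sketch, not the exact code in mm/kasan/kasan.c), the
+check behind a 1-byte access boils down to:
+
+	s8 shadow = *(s8 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(shadow) && (s8)(addr & 7) >= shadow)
+		kasan_report(addr, 1, is_write, _RET_IP_);
+
+For a partially accessible region (shadow = N) this fires when the access
+reaches byte N or beyond; for negative (poisoned) shadow values it always
+fires.
+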
+GCC 5.0 can also perform inline instrumentation. Instead of making function
+calls, GCC directly inserts the code that checks the shadow memory. This
+option significantly enlarges the kernel, but gives a 1.1x-2x performance
+boost over an outline-instrumented kernel.
diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index bd43704..b054443 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -14,6 +14,9 @@ ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
... unused hole ...
ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
... unused hole ...
+... unused hole ...
+ffffec0000000000 - fffffc0000000000 (=44 bits) kasan shadow memory (16TB)
+... unused hole ...
ffffffff80000000 - ffffffffa0000000 (=512 MB) kernel text mapping, from phys 0
ffffffffa0000000 - ffffffffff5fffff (=1525 MB) module mapping space
ffffffffff600000 - ffffffffffdfffff (=8 MB) vsyscalls
diff --git a/Makefile b/Makefile
index 868093c..73cff1c 100644
--- a/Makefile
+++ b/Makefile
@@ -394,7 +394,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL UTS_MACHINE
export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS

export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -671,6 +671,8 @@ ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC)), y)
KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO
endif

+include $(srctree)/scripts/Makefile.kasan
+
# Add user supplied CPPFLAGS, AFLAGS and CFLAGS as the last assignments
KBUILD_CPPFLAGS += $(KCPPFLAGS)
KBUILD_AFLAGS += $(KAFLAGS)
diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index af60478..2824951 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -40,7 +40,7 @@
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL_EXEC, -1,
+ GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index ca0e3d5..c7bc3e6 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -29,8 +29,8 @@
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL_EXEC, -1,
- __builtin_return_address(0));
+ GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+ NUMA_NO_NODE, __builtin_return_address(0));
}

enum aarch64_reloc_op {
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 977a623..1833f51 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -23,6 +23,7 @@
#include <linux/moduleloader.h>
#include <linux/elf.h>
#include <linux/mm.h>
+#include <linux/numa.h>
#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <linux/fs.h>
@@ -46,7 +47,7 @@ static DEFINE_SPINLOCK(dbe_lock);
void *module_alloc(unsigned long size)
{
return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END,
- GFP_KERNEL, PAGE_KERNEL, -1,
+ GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
index 2a625fb..0d498ef 100644
--- a/arch/parisc/kernel/module.c
+++ b/arch/parisc/kernel/module.c
@@ -219,7 +219,7 @@ void *module_alloc(unsigned long size)
* init_data correctly */
return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
GFP_KERNEL | __GFP_HIGHMEM,
- PAGE_KERNEL_RWX, -1,
+ PAGE_KERNEL_RWX, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}

diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index 7845e15..411a7ee 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -50,7 +50,7 @@ void *module_alloc(unsigned long size)
if (PAGE_ALIGN(size) > MODULES_LEN)
return NULL;
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL, -1,
+ GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
diff --git a/arch/sparc/kernel/module.c b/arch/sparc/kernel/module.c
index 4435488..192a617 100644
--- a/arch/sparc/kernel/module.c
+++ b/arch/sparc/kernel/module.c
@@ -29,7 +29,7 @@ static void *module_map(unsigned long size)
if (PAGE_ALIGN(size) > MODULES_LEN)
return NULL;
return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
- GFP_KERNEL, PAGE_KERNEL, -1,
+ GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
__builtin_return_address(0));
}
#else
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 105ae30..c8de68d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -77,6 +77,7 @@ config X86
select HAVE_CMPXCHG_LOCAL
select HAVE_CMPXCHG_DOUBLE
select HAVE_ARCH_KMEMCHECK
+ select HAVE_ARCH_KASAN if X86_64 && SPARSEMEM_VMEMMAP
select HAVE_USER_RETURN_NOTIFIER
select ARCH_BINFMT_ELF_RANDOMIZE_PIE
select HAVE_ARCH_JUMP_LABEL
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 6cf0111..262eec5 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
# Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
# The number is the same as you would ordinarily press at bootup.

+KASAN_SANITIZE := n
+
SVGA_MODE := -DSVGA_MODE=NORMAL_VGA

targets := vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 7194d9f..e126f6b 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -18,6 +18,7 @@ KBUILD_CFLAGS += $(call cc-option,-fno-stack-protector)

KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
GCOV_PROFILE := n
+KASAN_SANITIZE := n

LDFLAGS := -m elf_$(UTS_MACHINE)
LDFLAGS_vmlinux := -T
diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index 1308bee..2a64063 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -7,14 +7,15 @@
*
* ----------------------------------------------------------------------- */

+#include "misc.h"
+#include <linux/types.h>
+#include "../string.h"
#include <linux/efi.h>
#include <linux/pci.h>
#include <asm/efi.h>
#include <asm/setup.h>
#include <asm/desc.h>

-#undef memcpy /* Use memcpy from misc.c */
-
#include "eboot.h"

static efi_system_table_t *sys_table;
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 674019d..768b889 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -7,6 +7,7 @@
* we just keep it from happening
*/
#undef CONFIG_PARAVIRT
+#undef CONFIG_KASAN
#ifdef CONFIG_X86_32
#define _ASM_X86_DESC_H 1
#endif
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
new file mode 100644
index 0000000..74a2a8d
--- /dev/null
+++ b/arch/x86/include/asm/kasan.h
@@ -0,0 +1,27 @@
+#ifndef _ASM_X86_KASAN_H
+#define _ASM_X86_KASAN_H
+
+/*
+ * Compiler uses shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from compiler's shadow offset +
+ * 'kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT
+ */
+#define KASAN_SHADOW_START (KASAN_SHADOW_OFFSET + \
+ (0xffff800000000000ULL >> 3))
+/* 47 bits for kernel address -> (47 - 3) bits for shadow */
+#define KASAN_SHADOW_END (KASAN_SHADOW_START + (1ULL << (47 - 3)))
+
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_KASAN
+void __init kasan_early_init(void);
+void __init kasan_init(void);
+#else
+static inline void kasan_early_init(void) { }
+static inline void kasan_init(void) { }
+#endif
+
+#endif
+
+#endif
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 19e2c46..e466119 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -27,11 +27,12 @@ static __always_inline void *__inline_memcpy(void *to, const void *from, size_t
function. */

#define __HAVE_ARCH_MEMCPY 1
+extern void *__memcpy(void *to, const void *from, size_t len);
+
#ifndef CONFIG_KMEMCHECK
#if (__GNUC__ == 4 && __GNUC_MINOR__ >= 3) || __GNUC__ > 4
extern void *memcpy(void *to, const void *from, size_t len);
#else
-extern void *__memcpy(void *to, const void *from, size_t len);
#define memcpy(dst, src, len) \
({ \
size_t __len = (len); \
@@ -53,9 +54,11 @@ extern void *__memcpy(void *to, const void *from, size_t len);

#define __HAVE_ARCH_MEMSET
void *memset(void *s, int c, size_t n);
+void *__memset(void *s, int c, size_t n);

#define __HAVE_ARCH_MEMMOVE
void *memmove(void *dest, const void *src, size_t count);
+void *__memmove(void *dest, const void *src, size_t count);

int memcmp(const void *cs, const void *ct, size_t count);
size_t strlen(const char *s);
@@ -63,6 +66,19 @@ char *strcpy(char *dest, const char *src);
char *strcat(char *dest, const char *src);
int strcmp(const char *cs, const char *ct);

+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that not instrumented (e.g. mm/slub.c) we
+ * should use not instrumented version of mem* functions.
+ */
+
+#undef memcpy
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
#endif /* __KERNEL__ */

#endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 5ee2687..854b048 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -5,6 +5,7 @@
*/
#include <linux/errno.h>
#include <linux/compiler.h>
+#include <linux/kasan-checks.h>
#include <linux/thread_info.h>
#include <linux/string.h>
#include <asm/asm.h>
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 34df5c2..a2e1844 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -7,6 +7,7 @@
#include <linux/compiler.h>
#include <linux/errno.h>
#include <linux/lockdep.h>
+#include <linux/kasan-checks.h>
#include <asm/alternative.h>
#include <asm/cpufeature.h>
#include <asm/page.h>
@@ -59,6 +60,7 @@ static inline unsigned long __must_check copy_from_user(void *to,
int sz = __compiletime_object_size(to);

might_fault();
+ kasan_check_write(to, n);
if (likely(sz == -1 || sz >= n))
n = _copy_from_user(to, from, n);
#ifdef CONFIG_DEBUG_VM
@@ -72,7 +74,7 @@ static __always_inline __must_check
int copy_to_user(void __user *dst, const void *src, unsigned size)
{
might_fault();
-
+ kasan_check_read(src, size);
return _copy_to_user(dst, src, size);
}

@@ -122,6 +124,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
static __always_inline __must_check
int __copy_from_user(void *dst, const void __user *src, unsigned size)
{
+ kasan_check_read(src, size);
might_fault();
return __copy_from_user_nocheck(dst, src, size);
}
@@ -181,6 +184,7 @@ int __copy_in_user(void __user *dst, const void __user *src, unsigned size)
{
int ret = 0;

+ kasan_check_write(dst, size);
might_fault();
if (!__builtin_constant_p(size))
return copy_user_generic((__force void *)dst,
@@ -232,12 +236,14 @@ int __copy_in_user(void __user *dst, const void __user *src, unsigned size)
static __must_check __always_inline int
__copy_from_user_inatomic(void *dst, const void __user *src, unsigned size)
{
+ kasan_check_write(dst, size);
return __copy_from_user_nocheck(dst, (__force const void *)src, size);
}

static __must_check __always_inline int
__copy_to_user_inatomic(void __user *dst, const void *src, unsigned size)
{
+ kasan_check_read(src, size);
return __copy_to_user_nocheck((__force void *)dst, src, size);
}

@@ -248,6 +254,7 @@ static inline int
__copy_from_user_nocache(void *dst, const void __user *src, unsigned size)
{
might_sleep();
+ kasan_check_write(dst, size);
return __copy_user_nocache(dst, src, size, 1);
}

@@ -255,6 +262,7 @@ static inline int
__copy_from_user_inatomic_nocache(void *dst, const void __user *src,
unsigned size)
{
+ kasan_check_write(dst, size);
return __copy_user_nocache(dst, src, size, 0);
}

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 111eb35..c6fd092 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -3,6 +3,9 @@
#

extra-y := head_$(BITS).o head$(BITS).o head.o vmlinux.lds
+KASAN_SANITIZE_head$(BITS).o := n
+KASAN_SANITIZE_dumpstack.o := n
+KASAN_SANITIZE_dumpstack_$(BITS).o := n

CPPFLAGS_vmlinux.lds += -U$(UTS_MACHINE)

diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index deb6421..b10f70a 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -258,7 +258,10 @@ int __kprobes __die(const char *str, struct pt_regs *regs, long err)
printk("SMP ");
#endif
#ifdef CONFIG_DEBUG_PAGEALLOC
- printk("DEBUG_PAGEALLOC");
+ printk("DEBUG_PAGEALLOC ");
+#endif
+#ifdef CONFIG_KASAN
+ printk("KASAN");
#endif
printk("\n");
if (notify_die(DIE_OOPS, str, regs, err,
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 3b861b7..55b7b06 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -27,6 +27,7 @@
#include <asm/bios_ebda.h>
#include <asm/bootparam_utils.h>
#include <asm/microcode.h>
+#include <asm/kasan.h>

/*
* Manage page tables very early on.
@@ -46,7 +47,7 @@ static void __init reset_early_page_tables(void)

next_early_pgt = 0;

- write_cr3(__pa(early_level4_pgt));
+ write_cr3(__pa_nodebug(early_level4_pgt));
}

/* Create a new PMD entry */
@@ -59,7 +60,7 @@ int __init early_make_pgtable(unsigned long address)
pmdval_t pmd, *pmd_p;

/* Invalid address or early pgt is done ? */
- if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt))
+ if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
return -1;

again:
@@ -158,9 +159,12 @@ void __init x86_64_start_kernel(char * real_mode_data)
/* Kill off the identity-map trampoline */
reset_early_page_tables();

- /* clear bss before set_intr_gate with early_idt_handler */
clear_bss();

+ clear_page(init_level4_pgt);
+
+ kasan_early_init();
+
for (i = 0; i < NUM_EXCEPTION_VECTORS; i++)
set_intr_gate(i, &early_idt_handler_array[i]);
load_idt((const struct desc_ptr *)&idt_descr);
@@ -175,7 +179,6 @@ void __init x86_64_start_kernel(char * real_mode_data)
if (console_loglevel == 10)
early_printk("Kernel alive\n");

- clear_page(init_level4_pgt);
/* set init_level4_pgt kernel high mapping*/
init_level4_pgt[511] = early_level4_pgt[511];

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 54bf9c2..848e11a 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -538,3 +538,4 @@ ENTRY(nmi_idt_table)
__PAGE_ALIGNED_BSS
NEXT_PAGE(empty_zero_page)
.skip PAGE_SIZE
+
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index 216a4d7..5a7c1ce 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -24,6 +24,7 @@
#include <linux/fs.h>
#include <linux/string.h>
#include <linux/kernel.h>
+#include <linux/kasan.h>
#include <linux/bug.h>
#include <linux/mm.h>
#include <linux/gfp.h>
@@ -45,11 +46,18 @@ do { \

void *module_alloc(unsigned long size)
{
+ void *p;
+
if (PAGE_ALIGN(size) > MODULES_LEN)
return NULL;
- return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
+ p = __vmalloc_node_range(size, MODULE_ALIGN, MODULES_VADDR, MODULES_END,
GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL_EXEC,
- -1, __builtin_return_address(0));
+ 0, NUMA_NO_NODE, __builtin_return_address(0));
+ if (p && (kasan_module_alloc(p, size) < 0)) {
+ vfree(p);
+ return NULL;
+ }
+ return p;
}

#ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 935aff3..ce3cb66 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -89,6 +89,7 @@
#include <asm/cacheflush.h>
#include <asm/processor.h>
#include <asm/bugs.h>
+#include <asm/kasan.h>

#include <asm/vsyscall.h>
#include <asm/cpu.h>
@@ -1144,6 +1145,8 @@ void __init setup_arch(char **cmdline_p)

x86_init.paging.pagetable_init();

+ kasan_init();
+
if (boot_cpu_data.cpuid_level >= 0) {
/* A CPU has %cr4 if and only if it has CPUID */
mmu_cr4_features = read_cr4();
diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index b014d94..9c9079c 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -50,13 +50,19 @@ EXPORT_SYMBOL(csum_partial);
#undef memset
#undef memmove

+extern void *__memset(void *, int, __kernel_size_t);
+extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *, const void *, __kernel_size_t);
extern void *memset(void *, int, __kernel_size_t);
extern void *memcpy(void *, const void *, __kernel_size_t);
-extern void *__memcpy(void *, const void *, __kernel_size_t);
+extern void *memmove(void *, const void *, __kernel_size_t);
+
+EXPORT_SYMBOL(__memset);
+EXPORT_SYMBOL(__memcpy);
+EXPORT_SYMBOL(__memmove);

EXPORT_SYMBOL(memset);
EXPORT_SYMBOL(memcpy);
-EXPORT_SYMBOL(__memcpy);
EXPORT_SYMBOL(memmove);

#ifndef CONFIG_DEBUG_VIRTUAL
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 56313a3..89b53c9 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -53,6 +53,8 @@
.Lmemcpy_e_e:
.previous

+.weak memcpy
+
ENTRY(__memcpy)
ENTRY(memcpy)
CFI_STARTPROC
@@ -199,8 +201,8 @@ ENDPROC(__memcpy)
* only outcome...
*/
.section .altinstructions, "a"
- altinstruction_entry memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
+ altinstruction_entry __memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\
.Lmemcpy_e-.Lmemcpy_c,.Lmemcpy_e-.Lmemcpy_c
- altinstruction_entry memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
+ altinstruction_entry __memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \
.Lmemcpy_e_e-.Lmemcpy_c_e,.Lmemcpy_e_e-.Lmemcpy_c_e
.previous
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 65268a6..9c4b530 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -24,7 +24,10 @@
* Output:
* rax: dest
*/
+.weak memmove
+
ENTRY(memmove)
+ENTRY(__memmove)
CFI_STARTPROC

/* Handle more 32 bytes in loop */
@@ -220,4 +223,5 @@ ENTRY(memmove)
.Lmemmove_end_forward-.Lmemmove_begin_forward, \
.Lmemmove_end_forward_efs-.Lmemmove_begin_forward_efs
.previous
+ENDPROC(__memmove)
ENDPROC(memmove)
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 2dcb380..6f44935 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -56,6 +56,8 @@
.Lmemset_e_e:
.previous

+.weak memset
+
ENTRY(memset)
ENTRY(__memset)
CFI_STARTPROC
@@ -147,8 +149,8 @@ ENDPROC(__memset)
* feature to implement the right patch order.
*/
.section .altinstructions,"a"
- altinstruction_entry memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
- .Lfinal-memset,.Lmemset_e-.Lmemset_c
- altinstruction_entry memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
- .Lfinal-memset,.Lmemset_e_e-.Lmemset_c_e
+ altinstruction_entry __memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\
+ .Lfinal-__memset,.Lmemset_e-.Lmemset_c
+ altinstruction_entry __memset,.Lmemset_c_e,X86_FEATURE_ERMS, \
+ .Lfinal-__memset,.Lmemset_e_e-.Lmemset_c_e
.previous
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 23d8e5f..9cec934 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -18,6 +18,9 @@ obj-$(CONFIG_HIGHMEM) += highmem_32.o

obj-$(CONFIG_KMEMCHECK) += kmemcheck/

+KASAN_SANITIZE_kasan_init_$(BITS).o := n
+obj-$(CONFIG_KASAN) += kasan_init_$(BITS).o
+
obj-$(CONFIG_MMIOTRACE) += mmiotrace.o
mmiotrace-y := kmmio.o pf_in.o mmio-mod.o
obj-$(CONFIG_MMIOTRACE_TEST) += testmmiotrace.o
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
new file mode 100644
index 0000000..f9fb08e
--- /dev/null
+++ b/arch/x86/mm/kasan_init_64.c
@@ -0,0 +1,243 @@
+#define pr_fmt(fmt) "kasan: " fmt
+#include <linux/bootmem.h>
+#include <linux/kasan.h>
+#include <linux/kdebug.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+
+#include <asm/tlbflush.h>
+#include <asm/sections.h>
+
+extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern struct range pfn_mapped[E820_X_MAX];
+
+static pud_t kasan_zero_pud[PTRS_PER_PUD] __page_aligned_bss;
+static pmd_t kasan_zero_pmd[PTRS_PER_PMD] __page_aligned_bss;
+static pte_t kasan_zero_pte[PTRS_PER_PTE] __page_aligned_bss;
+
+/*
+ * This page used as early shadow. We don't use empty_zero_page
+ * at early stages, stack instrumentation could write some garbage
+ * to this page.
+ * Latter we reuse it as zero shadow for large ranges of memory
+ * that allowed to access, but not instrumented by kasan
+ * (vmalloc/vmemmap ...).
+ */
+static unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss;
+
+static int __init map_range(struct range *range)
+{
+ unsigned long start;
+ unsigned long end;
+
+ start = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->start));
+ end = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->end));
+
+ /*
+ * end + 1 here is intentional. We check several shadow bytes in advance
+ * to slightly speed up fastpath. In some rare cases we could cross
+ * boundary of mapped shadow, so we just map some more here.
+ */
+ return vmemmap_populate(start, end + 1, pfn_to_nid(range->start));
+}
+
+static void __init clear_pgds(unsigned long start,
+ unsigned long end)
+{
+ for (; start < end; start += PGDIR_SIZE)
+ pgd_clear(pgd_offset_k(start));
+}
+
+static void __init kasan_map_early_shadow(pgd_t *pgd)
+{
+ int i;
+ unsigned long start = KASAN_SHADOW_START;
+ unsigned long end = KASAN_SHADOW_END;
+
+ for (i = pgd_index(start); start < end; i++) {
+ pgd[i] = __pgd(__pa_nodebug(kasan_zero_pud)
+ | _KERNPG_TABLE);
+ start += PGDIR_SIZE;
+ }
+}
+
+static int __init zero_pte_populate(pmd_t *pmd, unsigned long addr,
+ unsigned long end)
+{
+ pte_t *pte = pte_offset_kernel(pmd, addr);
+
+ while (addr + PAGE_SIZE <= end) {
+ WARN_ON(!pte_none(*pte));
+ set_pte(pte, __pte(__pa_nodebug(kasan_zero_page)
+ | __PAGE_KERNEL_RO));
+ addr += PAGE_SIZE;
+ pte = pte_offset_kernel(pmd, addr);
+ }
+ return 0;
+}
+
+static int __init zero_pmd_populate(pud_t *pud, unsigned long addr,
+ unsigned long end)
+{
+ int ret = 0;
+ pmd_t *pmd = pmd_offset(pud, addr);
+
+ while (IS_ALIGNED(addr, PMD_SIZE) && addr + PMD_SIZE <= end) {
+ WARN_ON(!pmd_none(*pmd));
+ set_pmd(pmd, __pmd(__pa_nodebug(kasan_zero_pte)
+ | _KERNPG_TABLE));
+ addr += PMD_SIZE;
+ pmd = pmd_offset(pud, addr);
+ }
+ if (addr < end) {
+ if (pmd_none(*pmd)) {
+ void *p = vmemmap_alloc_block(PAGE_SIZE, 0);
+ if (!p)
+ return -ENOMEM;
+ set_pmd(pmd, __pmd(__pa_nodebug(p) | _KERNPG_TABLE));
+ }
+ ret = zero_pte_populate(pmd, addr, end);
+ }
+ return ret;
+}
+
+
+static int __init zero_pud_populate(pgd_t *pgd, unsigned long addr,
+ unsigned long end)
+{
+ int ret = 0;
+ pud_t *pud = pud_offset(pgd, addr);
+
+ while (IS_ALIGNED(addr, PUD_SIZE) && addr + PUD_SIZE <= end) {
+ WARN_ON(!pud_none(*pud));
+ set_pud(pud, __pud(__pa_nodebug(kasan_zero_pmd)
+ | _KERNPG_TABLE));
+ addr += PUD_SIZE;
+ pud = pud_offset(pgd, addr);
+ }
+
+ if (addr < end) {
+ if (pud_none(*pud)) {
+ void *p = vmemmap_alloc_block(PAGE_SIZE, 0);
+ if (!p)
+ return -ENOMEM;
+ set_pud(pud, __pud(__pa_nodebug(p) | _KERNPG_TABLE));
+ }
+ ret = zero_pmd_populate(pud, addr, end);
+ }
+ return ret;
+}
+
+static int __init zero_pgd_populate(unsigned long addr, unsigned long end)
+{
+ int ret = 0;
+ pgd_t *pgd = pgd_offset_k(addr);
+
+ while (IS_ALIGNED(addr, PGDIR_SIZE) && addr + PGDIR_SIZE <= end) {
+ WARN_ON(!pgd_none(*pgd));
+ set_pgd(pgd, __pgd(__pa_nodebug(kasan_zero_pud)
+ | _KERNPG_TABLE));
+ addr += PGDIR_SIZE;
+ pgd = pgd_offset_k(addr);
+ }
+
+ if (addr < end) {
+ if (pgd_none(*pgd)) {
+ void *p = vmemmap_alloc_block(PAGE_SIZE, 0);
+ if (!p)
+ return -ENOMEM;
+ set_pgd(pgd, __pgd(__pa_nodebug(p) | _KERNPG_TABLE));
+ }
+ ret = zero_pud_populate(pgd, addr, end);
+ }
+ return ret;
+}
+
+
+static void __init populate_zero_shadow(const void *start, const void *end)
+{
+ if (zero_pgd_populate((unsigned long)start, (unsigned long)end))
+ panic("kasan: unable to map zero shadow!");
+}
+
+
+#ifdef CONFIG_KASAN_INLINE
+static int kasan_die_handler(struct notifier_block *self,
+ unsigned long val,
+ void *data)
+{
+ if (val == DIE_GPF) {
+ pr_emerg("CONFIG_KASAN_INLINE enabled");
+ pr_emerg("GPF could be caused by NULL-ptr deref or user memory access");
+ }
+ return NOTIFY_OK;
+}
+
+static struct notifier_block kasan_die_notifier = {
+ .notifier_call = kasan_die_handler,
+};
+#endif
+
+void __init kasan_early_init(void)
+{
+ int i;
+ pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL;
+ pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE;
+ pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE;
+
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ kasan_zero_pte[i] = __pte(pte_val);
+
+ for (i = 0; i < PTRS_PER_PMD; i++)
+ kasan_zero_pmd[i] = __pmd(pmd_val);
+
+ for (i = 0; i < PTRS_PER_PUD; i++)
+ kasan_zero_pud[i] = __pud(pud_val);
+
+ kasan_map_early_shadow(early_level4_pgt);
+ kasan_map_early_shadow(init_level4_pgt);
+}
+
+void __init kasan_init(void)
+{
+ int i;
+
+#ifdef CONFIG_KASAN_INLINE
+ register_die_notifier(&kasan_die_notifier);
+#endif
+
+ memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
+ load_cr3(early_level4_pgt);
+ __flush_tlb_all();
+
+ clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+ populate_zero_shadow((void *)KASAN_SHADOW_START,
+ kasan_mem_to_shadow((void *)PAGE_OFFSET));
+
+ for (i = 0; i < E820_X_MAX; i++) {
+ if (pfn_mapped[i].end == 0)
+ break;
+
+ if (map_range(&pfn_mapped[i]))
+ panic("kasan: unable to allocate shadow!");
+ }
+ populate_zero_shadow(kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
+ kasan_mem_to_shadow((void *)__START_KERNEL_map));
+
+ vmemmap_populate((unsigned long)kasan_mem_to_shadow(_stext),
+ (unsigned long)kasan_mem_to_shadow(_end),
+ 0);
+
+ populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
+ (void *)KASAN_SHADOW_END);
+
+ memset(kasan_zero_page, 0, PAGE_SIZE);
+
+ load_cr3(init_level4_pgt);
+ __flush_tlb_all();
+ init_task.kasan_depth = 0;
+
+ pr_info("Kernel address sanitizer initialized\n");
+}
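
For reference, every range handled above is derived from a single fixed affine mapping from kernel addresses to shadow addresses. Below is a minimal user-space sketch of that arithmetic, assuming the x86_64 parameters from Kconfig.kasan further down (scale shift 3, shadow offset 0xdffffc0000000000); it is illustrative only, not kernel code:

#include <stdio.h>

#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_SHADOW_OFFSET 0xdffffc0000000000UL

/* Mirrors kasan_mem_to_shadow(): one shadow byte covers 8 bytes of memory. */
static unsigned long mem_to_shadow(unsigned long addr)
{
        return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
        unsigned long page_offset = 0xffff880000000000UL; /* x86_64 direct map */

        printf("shadow(PAGE_OFFSET) = 0x%lx\n", mem_to_shadow(page_offset));
        /* 8 bytes of memory advance the shadow by exactly one byte: */
        printf("shadow delta for +8 bytes = %lu\n",
               mem_to_shadow(page_offset + 8) - mem_to_shadow(page_offset));
        return 0;
}
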
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
# for more details.
#
#
-
+KASAN_SANITIZE := n
subdir- := rm

obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 9cac825..53c6b65 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
# for more details.
#
#
+KASAN_SANITIZE := n

always := realmode.bin realmode.relocs

diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index fd14be1..d211772 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -183,6 +183,7 @@ quiet_cmd_vdso = VDSO $@

VDSO_LDFLAGS = -fPIC -shared $(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
GCOV_PROFILE := n
+KASAN_SANITIZE := n

#
# Install the unstripped copy of vdso*.so listed in $(vdso-install-y).
diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
index 88e85cb..7ae5a8b 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -23,7 +23,6 @@
#include <linux/aer.h>

MODULE_VERSION(DRV_VER);
-MODULE_DEVICE_TABLE(pci, be_dev_ids);
MODULE_DESCRIPTION(DRV_DESC " " DRV_VER);
MODULE_AUTHOR("Emulex Corporation");
MODULE_LICENSE("GPL");
diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
index a683a83..4300fd2 100644
--- a/drivers/scsi/be2iscsi/be_main.c
+++ b/drivers/scsi/be2iscsi/be_main.c
@@ -48,7 +48,6 @@ static unsigned int be_iopoll_budget = 10;
static unsigned int be_max_phys_size = 64;
static unsigned int enable_msix = 1;

-MODULE_DEVICE_TABLE(pci, beiscsi_pci_id_table);
MODULE_DESCRIPTION(DRV_DESC " " BUILD_STR);
MODULE_VERSION(BUILD_STR);
MODULE_AUTHOR("Emulex Corporation");
diff --git a/fs/dcache.c b/fs/dcache.c
index 17222fa..1d04914 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -35,11 +35,13 @@
#include <linux/hardirq.h>
#include <linux/bit_spinlock.h>
#include <linux/rculist_bl.h>
+#include <linux/kasan.h>
#include <linux/prefetch.h>
#include <linux/ratelimit.h>
#include "internal.h"
#include "mount.h"

+
/*
* Usage:
* dcache->d_inode->i_lock protects:
@@ -1263,6 +1265,11 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
kmem_cache_free(dentry_cache, dentry);
return NULL;
}
+ if (IS_ENABLED(CONFIG_DCACHE_WORD_ACCESS))
+ kasan_unpoison_shadow(dname,
+ round_up(name->len + 1,
+ sizeof(unsigned long)));
+
} else {
dname = dentry->d_iname;
}
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index 953cd121..082c0ec 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -66,6 +66,7 @@
#define __deprecated __attribute__((deprecated))
#define __packed __attribute__((packed))
#define __weak __attribute__((weak))
+#define __alias(symbol) __attribute__((alias(#symbol)))

/*
* it doesn't make sense on ARM (currently the only user of __naked) to trace
diff --git a/include/linux/compiler-gcc3.h b/include/linux/compiler-gcc3.h
new file mode 100644
index 0000000..e69de29
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 998f4df..b71688b 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -155,6 +155,13 @@ extern struct task_group root_task_group;

#define INIT_TASK_COMM "swapper"

+#ifdef CONFIG_KASAN
+# define INIT_KASAN(tsk) \
+ .kasan_depth = 1,
+#else
+# define INIT_KASAN(tsk)
+#endif
+
/*
* INIT_TASK is used to set up the first task table, touch at
* your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -224,6 +231,7 @@ extern struct task_group root_task_group;
INIT_TASK_RCU_PREEMPT(tsk) \
INIT_CPUSET_SEQ \
INIT_VTIME(tsk) \
+ INIT_KASAN(tsk) \
}


diff --git a/include/linux/kasan-checks.h b/include/linux/kasan-checks.h
new file mode 100644
index 0000000..b7f8ace
--- /dev/null
+++ b/include/linux/kasan-checks.h
@@ -0,0 +1,12 @@
+#ifndef _LINUX_KASAN_CHECKS_H
+#define _LINUX_KASAN_CHECKS_H
+
+#ifdef CONFIG_KASAN
+void kasan_check_read(const void *p, unsigned int size);
+void kasan_check_write(const void *p, unsigned int size);
+#else
+static inline void kasan_check_read(const void *p, unsigned int size) { }
+static inline void kasan_check_write(const void *p, unsigned int size) { }
+#endif
+
+#endif
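
These two hooks let code that KASan cannot instrument report bad accesses explicitly; the strncpy_from_user() change further down uses kasan_check_write() exactly this way. A minimal sketch of the intended pattern (my_raw_copy() is a made-up example, not part of this patch):

#include <linux/kasan-checks.h>
#include <linux/string.h>

/* Hypothetical helper whose body KASan cannot see (e.g. written in asm). */
static void my_raw_copy(void *dst, const void *src, size_t len)
{
        /* Validate both ranges against the shadow before touching them. */
        kasan_check_read(src, len);
        kasan_check_write(dst, len);
        memcpy(dst, src, len);
}
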
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..5bb0744
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,86 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+struct vm_struct;
+
+#ifdef CONFIG_KASAN
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+#include <asm/kasan.h>
+#include <linux/sched.h>
+
+static inline void *kasan_mem_to_shadow(const void *addr)
+{
+ return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+ + KASAN_SHADOW_OFFSET;
+}
+
+/* Enable reporting bugs after kasan_disable_current() */
+static inline void kasan_enable_current(void)
+{
+ current->kasan_depth++;
+}
+
+/* Disable reporting bugs for current task */
+static inline void kasan_disable_current(void)
+{
+ current->kasan_depth--;
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size);
+
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
+void kasan_poison_slab(struct page *page);
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
+void kasan_poison_object_data(struct kmem_cache *cache, void *object);
+
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
+
+int kasan_module_alloc(void *addr, size_t size);
+void kasan_free_shadow(const struct vm_struct *vm);
+
+#else /* CONFIG_KASAN */
+
+static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_current(void) {}
+static inline void kasan_disable_current(void) {}
+
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
+static inline void kasan_poison_slab(struct page *page) {}
+static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
+ void *object) {}
+static inline void kasan_poison_object_data(struct kmem_cache *cache,
+ void *object) {}
+
+static inline void kasan_kmalloc_large(const void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
+ size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
+
+static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
+static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* _LINUX_KASAN_H */
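
One non-obvious detail in this header: kasan_depth is an unsigned counter that is zero when reporting is enabled, so kasan_disable_current() moves it away from zero (wrapping to 0xffffffff) and kasan_enable_current() brings it back. The pair therefore has to be used in disable-then-enable order, as the kmemleak and report changes below do. A sketch of the pattern (peek_word() is hypothetical):

/* Read memory that may be legitimately poisoned without triggering
 * a report.
 */
static unsigned long peek_word(const unsigned long *p)
{
        unsigned long val;

        kasan_disable_current();        /* depth 0 -> 0xffffffff: reports off */
        val = *p;                       /* the access is still checked, but
                                         * kasan_report() bails out early */
        kasan_enable_current();         /* depth back to 0: reports on */
        return val;
}
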
diff --git a/include/linux/module.h b/include/linux/module.h
index 761dc28..f0b8750 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -84,7 +84,7 @@ void trim_init_extable(struct module *m);

#ifdef MODULE
#define MODULE_GENERIC_TABLE(gtype,name) \
-extern const struct gtype##_id __mod_##gtype##_table \
+extern const typeof(name) __mod_##gtype##_table \
__attribute__ ((unused, alias(__stringify(name))))

#else /* !MODULE */
diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h
index 560ca53..8405769 100644
--- a/include/linux/moduleloader.h
+++ b/include/linux/moduleloader.h
@@ -80,4 +80,11 @@ int module_finalize(const Elf_Ehdr *hdr,
/* Any cleanup needed when module leaves. */
void module_arch_cleanup(struct module *mod);

+#ifdef CONFIG_KASAN
+#include <linux/kasan.h>
+#define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
+#else
+#define MODULE_ALIGN PAGE_SIZE
+#endif
+
#endif
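
The MODULE_ALIGN choice is what keeps each module's shadow page-aligned: at 8 bytes of memory per shadow byte, a mapping must start on a PAGE_SIZE << 3 boundary for its shadow to start on a page boundary, which kasan_module_alloc() in mm/kasan/kasan.c relies on when it vmalloc()s the shadow. A quick stand-alone check of the arithmetic (illustrative values only):

#include <stdio.h>

int main(void)
{
        unsigned long page = 4096, shift = 3;
        unsigned long align = page << shift;    /* MODULE_ALIGN: 32K */
        unsigned long base = 5 * align;         /* any MODULE_ALIGN-aligned base */

        /* The shadow of an aligned base lands exactly on a page boundary. */
        printf("shadow offset into page = %lu\n", (base >> shift) % page);
        return 0;
}
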
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4781332..843e99d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1383,6 +1383,9 @@ struct task_struct {
unsigned long timer_slack_ns;
unsigned long default_timer_slack_ns;

+#ifdef CONFIG_KASAN
+ unsigned int kasan_depth;
+#endif
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/* Index of current stored address in ret_stack */
int curr_ret_stack;
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 027276f..d8fde5d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -208,4 +208,23 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
}
#endif

+
+/**
+ * virt_to_obj - returns address of the beginning of object.
+ * @s: object's kmem_cache
+ * @slab_page: address of slab page
+ * @x: address within object memory range
+ *
+ * Returns address of the beginning of object
+ */
+static inline void *virt_to_obj(struct kmem_cache *s,
+ const void *slab_page,
+ const void *x)
+{
+ return (void *)x - ((x - slab_page) % s->size);
+}
+
+void object_err(struct kmem_cache *s, struct page *page,
+ u8 *object, char *reason);
+
#endif /* _LINUX_SLUB_DEF_H */
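
virt_to_obj() is used by the new report code in mm/kasan/report.c to map a faulting address back to the start of the slab object containing it. A stand-alone sketch of the arithmetic with made-up values:

#include <stdio.h>

int main(void)
{
        unsigned long slab_page = 0x12340000UL; /* page_address(page) */
        unsigned long obj_size = 256;           /* s->size, incl. metadata */
        unsigned long x = 0x12340305UL;         /* bad access inside object 3 */

        /* Same computation as virt_to_obj() above. */
        unsigned long obj = x - ((x - slab_page) % obj_size);

        printf("object starts at 0x%lx\n", obj); /* prints 0x12340300 */
        return 0;
}
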
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 7d5773a..638c673 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -16,6 +16,8 @@ struct vm_area_struct; /* vma defining user mapping in mm_types.h */
#define VM_USERMAP 0x00000008 /* suitable for remap_vmalloc_range */
#define VM_VPAGES 0x00000010 /* buffer for pages was vmalloc'ed */
#define VM_UNLIST 0x00000020 /* vm_struct is not listed in vmlist */
+#define VM_NO_GUARD 0x00000040 /* don't add guard page */
+#define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */
/* bits [20..32] reserved for arch specific ioremap internals */

/*
@@ -75,7 +77,9 @@ extern void *vmalloc_32_user(unsigned long size);
extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
unsigned long start, unsigned long end, gfp_t gfp_mask,
- pgprot_t prot, int node, const void *caller);
+ pgprot_t prot, unsigned long vm_flags, int node,
+ const void *caller);
+
extern void vfree(const void *addr);

extern void *vmap(struct page **pages, unsigned int count,
@@ -92,8 +96,12 @@ void vmalloc_sync_all(void);

static inline size_t get_vm_area_size(const struct vm_struct *area)
{
- /* return actual size without guard page */
- return area->size - PAGE_SIZE;
+ if (!(area->flags & VM_NO_GUARD))
+ /* return actual size without guard page */
+ return area->size - PAGE_SIZE;
+ else
+ return area->size;
+
}

extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 74fdc5c..818bf1c 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1481,6 +1481,8 @@ source "lib/Kconfig.kgdb"

source "lib/Kconfig.kmemcheck"

+source "lib/Kconfig.kasan"
+
config TEST_STRING_HELPERS
tristate "Test functions located in the string_helpers module at runtime"

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..4fecaedc
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,54 @@
+config HAVE_ARCH_KASAN
+ bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+ bool "KASan: runtime memory debugger"
+ depends on SLUB_DEBUG
+ select CONSTRUCTORS
+ help
+ Enables the kernel address sanitizer - a runtime memory debugger
+ designed to find out-of-bounds accesses and use-after-free bugs.
+ This is strictly a debugging feature. It consumes about 1/8
+ of available memory and causes roughly a 3x performance slowdown.
+ For better error detection enable CONFIG_STACKTRACE,
+ and add slub_debug=U to the boot cmdline.
+
+config KASAN_SHADOW_OFFSET
+ hex
+ default 0xdffffc0000000000 if X86_64
+
+choice
+ prompt "Instrumentation type"
+ depends on KASAN
+ default KASAN_OUTLINE
+
+config KASAN_OUTLINE
+ bool "Outline instrumentation"
+ help
+ Before every memory access the compiler inserts a call to
+ __asan_load*/__asan_store*. These functions check the shadow
+ memory. This is slower than inline instrumentation, but it
+ doesn't bloat the size of the kernel's .text section as much
+ as inline does.
+
+config KASAN_INLINE
+ bool "Inline instrumentation"
+ help
+ The compiler directly inserts code that checks the shadow memory
+ before each memory access. This is faster than outline mode (in
+ some workloads it gives about a 2x boost over outline
+ instrumentation), but it makes the kernel's .text much bigger.
+
+endchoice
+
+config TEST_KASAN
+ tristate "Module for testing kasan for bug detection"
+ depends on m && KASAN
+ help
+ This is a test module that does various nasty things like
+ out-of-bounds accesses and use-after-free. It is useful for
+ testing kernel debugging features like the kernel address sanitizer.
+
+endif
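
To make the outline/inline distinction in the help texts concrete, here is roughly what each mode emits for a 4-byte store "*p = 1;". This is a sketch of the idea only; real GCC output differs in detail:

/* Outline mode: every access becomes a call into mm/kasan/kasan.c. */
static void store_outline(int *p)
{
        __asan_store4((unsigned long)p);
        *p = 1;
}

/* Inline mode: the shadow check is expanded at the call site and only
 * the reporting slow path is a function call.
 */
static void store_inline(int *p)
{
        s8 shadow = *(s8 *)(((unsigned long)p >> 3) + KASAN_SHADOW_OFFSET);
        s8 last = ((unsigned long)p & 7) + 3;   /* offset of last byte hit */

        if (shadow && last >= shadow)
                __asan_report_store4_noabort((unsigned long)p);
        *p = 1;
}
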
diff --git a/lib/Makefile b/lib/Makefile
index 9efe480..8bd332a 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -29,6 +29,7 @@ obj-y += string_helpers.o
obj-$(CONFIG_TEST_STRING_HELPERS) += test-string_helpers.o
obj-y += kstrtox.o
obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o
+obj-$(CONFIG_TEST_KASAN) += test_kasan.o

ifeq ($(CONFIG_DEBUG_KOBJECT),y)
CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/idr.c b/lib/idr.c
index a3bfde8..dd06ccb 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -618,26 +618,27 @@ void __idr_remove_all(struct idr *idp)
struct idr_layer **paa = &pa[0];

n = idp->layers * IDR_BITS;
- p = idp->top;
+ *paa = idp->top;
rcu_assign_pointer(idp->top, NULL);
max = idr_max(idp->layers);

id = 0;
while (id >= 0 && id <= max) {
+ p = *paa;
while (n > IDR_BITS && p) {
n -= IDR_BITS;
- *paa++ = p;
p = p->ary[(id >> n) & IDR_MASK];
+ *++paa = p;
}

bt_mask = id;
id += 1 << n;
/* Get the highest bit that the above add changed from 0->1. */
while (n < fls(id ^ bt_mask)) {
- if (p)
- free_layer(idp, p);
+ if (*paa)
+ free_layer(idp, *paa);
n += IDR_BITS;
- p = *--paa;
+ --paa;
}
}
idp->layers = 0;
@@ -721,15 +722,16 @@ int idr_for_each(struct idr *idp,
struct idr_layer **paa = &pa[0];

n = idp->layers * IDR_BITS;
- p = rcu_dereference_raw(idp->top);
+ *paa = rcu_dereference_raw(idp->top);
max = idr_max(idp->layers);

id = 0;
while (id >= 0 && id <= max) {
+ p = *paa;
while (n > 0 && p) {
n -= IDR_BITS;
- *paa++ = p;
p = rcu_dereference_raw(p->ary[(id >> n) & IDR_MASK]);
+ *++paa = p;
}

if (p) {
@@ -741,7 +743,7 @@ int idr_for_each(struct idr *idp,
id += 1 << n;
while (n < fls(id)) {
n += IDR_BITS;
- p = *--paa;
+ --paa;
}
}

@@ -769,17 +771,18 @@ void *idr_get_next(struct idr *idp, int *nextidp)
int n, max;

/* find first ent */
- p = rcu_dereference_raw(idp->top);
+ p = *paa = rcu_dereference_raw(idp->top);
if (!p)
return NULL;
n = (p->layer + 1) * IDR_BITS;
max = idr_max(p->layer + 1);

while (id >= 0 && id <= max) {
+ p = *paa;
while (n > 0 && p) {
n -= IDR_BITS;
- *paa++ = p;
p = rcu_dereference_raw(p->ary[(id >> n) & IDR_MASK]);
+ *++paa = p;
}

if (p) {
@@ -797,7 +800,7 @@ void *idr_get_next(struct idr *idp, int *nextidp)
id = round_up(id + 1, 1 << n);
while (n < fls(id)) {
n += IDR_BITS;
- p = *--paa;
+ --paa;
}
}
return NULL;
diff --git a/lib/strncpy_from_user.c b/lib/strncpy_from_user.c
index bb2b201..b5e2ad8 100644
--- a/lib/strncpy_from_user.c
+++ b/lib/strncpy_from_user.c
@@ -1,5 +1,6 @@
#include <linux/module.h>
#include <linux/uaccess.h>
+#include <linux/kasan-checks.h>
#include <linux/kernel.h>
#include <linux/errno.h>

@@ -106,6 +107,7 @@ long strncpy_from_user(char *dst, const char __user *src, long count)
src_addr = (unsigned long)src;
if (likely(src_addr < max_addr)) {
unsigned long max = max_addr - src_addr;
+ kasan_check_write(dst, count);
return do_strncpy_from_user(dst, src, count, max);
}
return -EFAULT;
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
new file mode 100644
index 0000000..1740caf
--- /dev/null
+++ b/lib/test_kasan.c
@@ -0,0 +1,277 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+static noinline void __init kmalloc_oob_right(void)
+{
+ char *ptr;
+ size_t size = 123;
+
+ pr_info("out-of-bounds to right\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ptr[size] = 'x';
+ kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_left(void)
+{
+ char *ptr;
+ size_t size = 15;
+
+ pr_info("out-of-bounds to left\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ *ptr = *(ptr - 1);
+ kfree(ptr);
+}
+
+static noinline void __init kmalloc_node_oob_right(void)
+{
+ char *ptr;
+ size_t size = 4096;
+
+ pr_info("kmalloc_node(): out-of-bounds to right\n");
+ ptr = kmalloc_node(size, GFP_KERNEL, 0);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ptr[size] = 0;
+ kfree(ptr);
+}
+
+static noinline void __init kmalloc_large_oob_right(void)
+{
+ char *ptr;
+ size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
+
+ pr_info("kmalloc large allocation: out-of-bounds to right\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ptr[size] = 0;
+ kfree(ptr);
+}
+
+static noinline void __init kmalloc_oob_krealloc_more(void)
+{
+ char *ptr1, *ptr2;
+ size_t size1 = 17;
+ size_t size2 = 19;
+
+ pr_info("out-of-bounds after krealloc more\n");
+ ptr1 = kmalloc(size1, GFP_KERNEL);
+ ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+ if (!ptr1 || !ptr2) {
+ pr_err("Allocation failed\n");
+ kfree(ptr1);
+ return;
+ }
+
+ ptr2[size2] = 'x';
+ kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_krealloc_less(void)
+{
+ char *ptr1, *ptr2;
+ size_t size1 = 17;
+ size_t size2 = 15;
+
+ pr_info("out-of-bounds after krealloc less\n");
+ ptr1 = kmalloc(size1, GFP_KERNEL);
+ ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+ if (!ptr1 || !ptr2) {
+ pr_err("Allocation failed\n");
+ kfree(ptr1);
+ return;
+ }
+ ptr2[size1] = 'x';
+ kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_16(void)
+{
+ struct {
+ u64 words[2];
+ } *ptr1, *ptr2;
+
+ pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+ ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+ ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+ if (!ptr1 || !ptr2) {
+ pr_err("Allocation failed\n");
+ kfree(ptr1);
+ kfree(ptr2);
+ return;
+ }
+ *ptr1 = *ptr2;
+ kfree(ptr1);
+ kfree(ptr2);
+}
+
+static noinline void __init kmalloc_oob_in_memset(void)
+{
+ char *ptr;
+ size_t size = 666;
+
+ pr_info("out-of-bounds in memset\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ memset(ptr, 0, size+5);
+ kfree(ptr);
+}
+
+static noinline void __init kmalloc_uaf(void)
+{
+ char *ptr;
+ size_t size = 10;
+
+ pr_info("use-after-free\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ kfree(ptr);
+ *(ptr + 8) = 'x';
+}
+
+static noinline void __init kmalloc_uaf_memset(void)
+{
+ char *ptr;
+ size_t size = 33;
+
+ pr_info("use-after-free in memset\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ kfree(ptr);
+ memset(ptr, 0, size);
+}
+
+static noinline void __init kmalloc_uaf2(void)
+{
+ char *ptr1, *ptr2;
+ size_t size = 43;
+
+ pr_info("use-after-free after another kmalloc\n");
+ ptr1 = kmalloc(size, GFP_KERNEL);
+ if (!ptr1) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ kfree(ptr1);
+ ptr2 = kmalloc(size, GFP_KERNEL);
+ if (!ptr2) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ptr1[40] = 'x';
+ kfree(ptr2);
+}
+
+static noinline void __init kmem_cache_oob(void)
+{
+ char *p;
+ size_t size = 200;
+ struct kmem_cache *cache = kmem_cache_create("test_cache",
+ size, 0,
+ 0, NULL);
+ if (!cache) {
+ pr_err("Cache allocation failed\n");
+ return;
+ }
+ pr_info("out-of-bounds in kmem_cache_alloc\n");
+ p = kmem_cache_alloc(cache, GFP_KERNEL);
+ if (!p) {
+ pr_err("Allocation failed\n");
+ kmem_cache_destroy(cache);
+ return;
+ }
+
+ *p = p[size];
+ kmem_cache_free(cache, p);
+ kmem_cache_destroy(cache);
+}
+
+static char global_array[10];
+
+static noinline void __init kasan_global_oob(void)
+{
+ volatile int i = 3;
+ char *p = &global_array[ARRAY_SIZE(global_array) + i];
+
+ pr_info("out-of-bounds global variable\n");
+ *(volatile char *)p;
+}
+
+static noinline void __init kasan_stack_oob(void)
+{
+ char stack_array[10];
+ volatile int i = 0;
+ char *p = &stack_array[ARRAY_SIZE(stack_array) + i];
+
+ pr_info("out-of-bounds on stack\n");
+ *(volatile char *)p;
+}
+
+static int __init kmalloc_tests_init(void)
+{
+ kmalloc_oob_right();
+ kmalloc_oob_left();
+ kmalloc_node_oob_right();
+ kmalloc_large_oob_right();
+ kmalloc_oob_krealloc_more();
+ kmalloc_oob_krealloc_less();
+ kmalloc_oob_16();
+ kmalloc_oob_in_memset();
+ kmalloc_uaf();
+ kmalloc_uaf_memset();
+ kmalloc_uaf2();
+ kmem_cache_oob();
+ kasan_stack_oob();
+ kasan_global_oob();
+ return -EAGAIN;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
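
With CONFIG_TEST_KASAN=m, insmod'ing the module runs each case once, and each should produce exactly one KASan report; init returns -EAGAIN on purpose so the module is never left loaded and the suite can be re-run with another insmod. A new case follows the same shape; for instance this hypothetical test of the interposed memcpy() (not part of this patch):

static noinline void __init kmalloc_oob_memcpy(void)
{
        char *ptr;
        size_t size = 64;

        pr_info("out-of-bounds in memcpy\n");
        ptr = kmalloc(size, GFP_KERNEL);
        if (!ptr) {
                pr_err("Allocation failed\n");
                return;
        }

        /* The source range extends 4 bytes past the object. */
        memcpy(ptr, ptr + 4, size);
        kfree(ptr);
}
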
diff --git a/mm/Makefile b/mm/Makefile
index 72c5acb..a4c706a 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,9 @@
# Makefile for the linux memory manager.
#

+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
+
mmu-y := nommu.o
mmu-$(CONFIG_MMU) := fremap.o highmem.o madvise.o memory.o mincore.o \
mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
@@ -44,6 +47,7 @@ obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
obj-$(CONFIG_SLAB) += slab.o
obj-$(CONFIG_SLUB) += slub.o
obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
+obj-$(CONFIG_KASAN) += kasan/
obj-$(CONFIG_FAILSLAB) += failslab.o
obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
obj-$(CONFIG_FS_XIP) += filemap_xip.o
diff --git a/mm/compaction.c b/mm/compaction.c
index eeaaa92..ea4a300 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
#include <linux/sysfs.h>
#include <linux/balloon_compaction.h>
#include <linux/page-isolation.h>
+#include <linux/kasan.h>
#include "internal.h"

#ifdef CONFIG_COMPACTION
@@ -59,6 +60,7 @@ static void map_pages(struct list_head *list)
list_for_each_entry(page, list, lru) {
arch_alloc_page(page, 0);
kernel_map_pages(page, 1, 1);
+ kasan_alloc_pages(page, 0);
}
}

diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..bd837b8
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,8 @@
+KASAN_SANITIZE := n
+
+CFLAGS_REMOVE_kasan.o = -pg
+# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..a9f91cc
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,532 @@
+/*
+ * This file contains shadow memory manipulation code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some code is borrowed from https://github.com/xairy/linux by
+ * Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/memory.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/vmalloc.h>
+#include <linux/kasan.h>
+
+#include "kasan.h"
+#include "../slab.h"
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+ void *shadow_start, *shadow_end;
+
+ shadow_start = kasan_mem_to_shadow(address);
+ shadow_end = kasan_mem_to_shadow(address + size);
+
+ memset(shadow_start, value, shadow_end - shadow_start);
+}
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+ kasan_poison_shadow(address, size, 0);
+
+ if (size & KASAN_SHADOW_MASK) {
+ u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
+ *shadow = size & KASAN_SHADOW_MASK;
+ }
+}
+
+
+/*
+ * All functions below are always inlined so the compiler can
+ * perform better optimizations in each of __asan_loadX/__asan_storeX
+ * depending on the memory access size X.
+ */
+
+static __always_inline bool memory_is_poisoned_1(unsigned long addr)
+{
+ s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr);
+
+ if (unlikely(shadow_value)) {
+ s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
+ return unlikely(last_accessible_byte >= shadow_value);
+ }
+
+ return false;
+}
+
+static __always_inline bool memory_is_poisoned_2(unsigned long addr)
+{
+ u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+
+ if (unlikely(*shadow_addr)) {
+ if (memory_is_poisoned_1(addr + 1))
+ return true;
+
+ if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+ return false;
+
+ return unlikely(*(u8 *)shadow_addr);
+ }
+
+ return false;
+}
+
+static __always_inline bool memory_is_poisoned_4(unsigned long addr)
+{
+ u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+
+ if (unlikely(*shadow_addr)) {
+ if (memory_is_poisoned_1(addr + 3))
+ return true;
+
+ if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+ return false;
+
+ return unlikely(*(u8 *)shadow_addr);
+ }
+
+ return false;
+}
+
+static __always_inline bool memory_is_poisoned_8(unsigned long addr)
+{
+ u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+
+ if (unlikely(*shadow_addr)) {
+ if (memory_is_poisoned_1(addr + 7))
+ return true;
+
+ if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7))
+ return false;
+
+ return unlikely(*(u8 *)shadow_addr);
+ }
+
+ return false;
+}
+
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+ u32 *shadow_addr = (u32 *)kasan_mem_to_shadow((void *)addr);
+
+ if (unlikely(*shadow_addr)) {
+ u16 shadow_first_bytes = *(u16 *)shadow_addr;
+ s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK;
+
+ if (unlikely(shadow_first_bytes))
+ return true;
+
+ if (likely(!last_byte))
+ return false;
+
+ return memory_is_poisoned_1(addr + 15);
+ }
+
+ return false;
+}
+
+static __always_inline unsigned long bytes_is_zero(const u8 *start,
+ size_t size)
+{
+ while (size) {
+ if (unlikely(*start))
+ return (unsigned long)start;
+ start++;
+ size--;
+ }
+
+ return 0;
+}
+
+static __always_inline unsigned long memory_is_zero(const void *start,
+ const void *end)
+{
+ unsigned int words;
+ unsigned long ret;
+ unsigned int prefix = (unsigned long)start % 8;
+
+ if (end - start <= 16)
+ return bytes_is_zero(start, end - start);
+
+ if (prefix) {
+ prefix = 8 - prefix;
+ ret = bytes_is_zero(start, prefix);
+ if (unlikely(ret))
+ return ret;
+ start += prefix;
+ }
+
+ words = (end - start) / 8;
+ while (words) {
+ if (unlikely(*(u64 *)start))
+ return bytes_is_zero(start, 8);
+ start += 8;
+ words--;
+ }
+
+ return bytes_is_zero(start, (end - start) % 8);
+}
+
+static __always_inline bool memory_is_poisoned_n(unsigned long addr,
+ size_t size)
+{
+ unsigned long ret;
+
+ ret = memory_is_zero(kasan_mem_to_shadow((void *)addr),
+ kasan_mem_to_shadow((void *)addr + size - 1) + 1);
+
+ if (unlikely(ret)) {
+ unsigned long last_byte = addr + size - 1;
+ s8 *last_shadow = (s8 *)kasan_mem_to_shadow((void *)last_byte);
+
+ if (unlikely(ret != (unsigned long)last_shadow ||
+ ((last_byte & KASAN_SHADOW_MASK) >= *last_shadow)))
+ return true;
+ }
+ return false;
+}
+
+static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size)
+{
+ if (__builtin_constant_p(size)) {
+ switch (size) {
+ case 1:
+ return memory_is_poisoned_1(addr);
+ case 2:
+ return memory_is_poisoned_2(addr);
+ case 4:
+ return memory_is_poisoned_4(addr);
+ case 8:
+ return memory_is_poisoned_8(addr);
+ case 16:
+ return memory_is_poisoned_16(addr);
+ default:
+ BUILD_BUG();
+ }
+ }
+
+ return memory_is_poisoned_n(addr, size);
+}
+
+static __always_inline void check_memory_region_inline(unsigned long addr,
+ size_t size, bool write,
+ unsigned long ret_ip)
+{
+ if (unlikely(size == 0))
+ return;
+
+ if (unlikely((void *)addr <
+ kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) {
+ kasan_report(addr, size, write, ret_ip);
+ return;
+ }
+
+ if (likely(!memory_is_poisoned(addr, size)))
+ return;
+
+ kasan_report(addr, size, write, ret_ip);
+}
+
+static void check_memory_region(unsigned long addr,
+ size_t size, bool write,
+ unsigned long ret_ip)
+{
+ check_memory_region_inline(addr, size, write, ret_ip);
+}
+
+void kasan_check_read(const void *p, unsigned int size)
+{
+ check_memory_region((unsigned long)p, size, false, _RET_IP_);
+}
+EXPORT_SYMBOL(kasan_check_read);
+
+void kasan_check_write(const void *p, unsigned int size)
+{
+ check_memory_region((unsigned long)p, size, true, _RET_IP_);
+}
+EXPORT_SYMBOL(kasan_check_write);
+
+#undef memset
+void *memset(void *addr, int c, size_t len)
+{
+ check_memory_region((unsigned long)addr, len, true, _RET_IP_);
+
+ return __memset(addr, c, len);
+}
+
+#undef memmove
+void *memmove(void *dest, const void *src, size_t len)
+{
+ check_memory_region((unsigned long)src, len, false, _RET_IP_);
+ check_memory_region((unsigned long)dest, len, true, _RET_IP_);
+
+ return __memmove(dest, src, len);
+}
+
+#undef memcpy
+void *memcpy(void *dest, const void *src, size_t len)
+{
+ check_memory_region((unsigned long)src, len, false, _RET_IP_);
+ check_memory_region((unsigned long)dest, len, true, _RET_IP_);
+
+ return __memcpy(dest, src, len);
+}
+
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+ if (likely(!PageHighMem(page)))
+ kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+ if (likely(!PageHighMem(page)))
+ kasan_poison_shadow(page_address(page),
+ PAGE_SIZE << order,
+ KASAN_FREE_PAGE);
+}
+
+void kasan_poison_slab(struct page *page)
+{
+ kasan_poison_shadow(page_address(page),
+ PAGE_SIZE << compound_order(page),
+ KASAN_KMALLOC_REDZONE);
+}
+
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+{
+ kasan_unpoison_shadow(object, cache->object_size);
+}
+
+void kasan_poison_object_data(struct kmem_cache *cache, void *object)
+{
+ kasan_poison_shadow(object,
+ round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
+ KASAN_KMALLOC_REDZONE);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+ kasan_kmalloc(cache, object, cache->object_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+ unsigned long size = cache->object_size;
+ unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+ /* RCU slabs could be legally used after free within the RCU period */
+ if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+ return;
+
+ kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+ unsigned long redzone_start;
+ unsigned long redzone_end;
+
+ if (unlikely(object == NULL))
+ return;
+
+ redzone_start = round_up((unsigned long)(object + size),
+ KASAN_SHADOW_SCALE_SIZE);
+ redzone_end = round_up((unsigned long)object + cache->object_size,
+ KASAN_SHADOW_SCALE_SIZE);
+
+ kasan_unpoison_shadow(object, size);
+ kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+ KASAN_KMALLOC_REDZONE);
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+ struct page *page;
+ unsigned long redzone_start;
+ unsigned long redzone_end;
+
+ if (unlikely(ptr == NULL))
+ return;
+
+ page = virt_to_page(ptr);
+ redzone_start = round_up((unsigned long)(ptr + size),
+ KASAN_SHADOW_SCALE_SIZE);
+ redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+ kasan_unpoison_shadow(ptr, size);
+ kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+ KASAN_PAGE_REDZONE);
+}
+
+void kasan_krealloc(const void *object, size_t size)
+{
+ struct page *page;
+
+ if (unlikely(object == ZERO_SIZE_PTR))
+ return;
+
+ page = virt_to_head_page(object);
+
+ if (unlikely(!PageSlab(page)))
+ kasan_kmalloc_large(object, size);
+ else
+ kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+ struct page *page = virt_to_page(ptr);
+
+ kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+ KASAN_FREE_PAGE);
+}
+
+int kasan_module_alloc(void *addr, size_t size)
+{
+ void *ret;
+ size_t shadow_size;
+ unsigned long shadow_start;
+
+ shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
+ shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
+ PAGE_SIZE);
+
+
+ ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
+ shadow_start + shadow_size,
+ GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
+ PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
+ __builtin_return_address(0));
+
+ if (ret) {
+ find_vm_area(addr)->flags |= VM_KASAN;
+ return 0;
+ }
+
+ return -ENOMEM;
+}
+
+void kasan_free_shadow(const struct vm_struct *vm)
+{
+ if (vm->flags & VM_KASAN)
+ vfree(kasan_mem_to_shadow(vm->addr));
+}
+
+static void register_global(struct kasan_global *global)
+{
+ size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE);
+
+ kasan_unpoison_shadow(global->beg, global->size);
+
+ kasan_poison_shadow(global->beg + aligned_size,
+ global->size_with_redzone - aligned_size,
+ KASAN_GLOBAL_REDZONE);
+}
+
+void __asan_register_globals(struct kasan_global *globals, size_t size)
+{
+ int i;
+
+ for (i = 0; i < size; i++)
+ register_global(&globals[i]);
+}
+EXPORT_SYMBOL(__asan_register_globals);
+
+void __asan_unregister_globals(struct kasan_global *globals, size_t size)
+{
+}
+EXPORT_SYMBOL(__asan_unregister_globals);
+
+#define DEFINE_ASAN_LOAD_STORE(size) \
+ void __asan_load##size(unsigned long addr) \
+ { \
+ check_memory_region_inline(addr, size, false, _RET_IP_);\
+ } \
+ EXPORT_SYMBOL(__asan_load##size); \
+ __alias(__asan_load##size) \
+ void __asan_load##size##_noabort(unsigned long); \
+ EXPORT_SYMBOL(__asan_load##size##_noabort); \
+ void __asan_store##size(unsigned long addr) \
+ { \
+ check_memory_region_inline(addr, size, true, _RET_IP_); \
+ } \
+ EXPORT_SYMBOL(__asan_store##size); \
+ __alias(__asan_store##size) \
+ void __asan_store##size##_noabort(unsigned long); \
+ EXPORT_SYMBOL(__asan_store##size##_noabort)
+
+DEFINE_ASAN_LOAD_STORE(1);
+DEFINE_ASAN_LOAD_STORE(2);
+DEFINE_ASAN_LOAD_STORE(4);
+DEFINE_ASAN_LOAD_STORE(8);
+DEFINE_ASAN_LOAD_STORE(16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+ check_memory_region(addr, size, false, _RET_IP_);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+__alias(__asan_loadN)
+void __asan_loadN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_loadN_noabort);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+ check_memory_region(addr, size, true, _RET_IP_);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+__alias(__asan_storeN)
+void __asan_storeN_noabort(unsigned long, size_t);
+EXPORT_SYMBOL(__asan_storeN_noabort);
+
+/* to shut up compiler complaints */
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+static int kasan_mem_notifier(struct notifier_block *nb,
+ unsigned long action, void *data)
+{
+ return (action == MEM_GOING_ONLINE) ? NOTIFY_BAD : NOTIFY_OK;
+}
+
+static int __init kasan_memhotplug_init(void)
+{
+ pr_err("WARNING: KASan doesn't support memory hot-add\n");
+ pr_err("Memory hot-add will be disabled\n");
+
+ hotplug_memory_notifier(kasan_mem_notifier, 0);
+
+ return 0;
+}
+
+module_init(kasan_memhotplug_init);
+#endif
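
A detail worth spelling out: kasan_unpoison_shadow() stores size & 7 in the shadow byte of a partially used granule, which is how overflows inside the last 8-byte granule of an allocation are still caught. A worked example for kmalloc(13), assuming the usual 8 bytes of memory per shadow byte:

#include <stdio.h>

int main(void)
{
        unsigned long size = 13;
        unsigned char shadow[2];

        shadow[0] = 0;          /* bytes 0..7: fully accessible */
        shadow[1] = size & 7;   /* == 5: only bytes 8..12 accessible */

        /* Byte 14 has offset 6 within the second granule; 6 >= 5, so
         * memory_is_poisoned_1() above flags a 1-byte overflow.
         */
        printf("shadow = { %u, %u }, byte 14 poisoned: %d\n",
               shadow[0], shadow[1], 6 >= shadow[1]);
        return 0;
}
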
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..14cdff3
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,71 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#include <linux/kasan.h>
+
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_FREE_PAGE 0xFF /* page was freed */
+#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
+#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_GLOBAL_REDZONE 0xFA /* redzone for global variable */
+
+/*
+ * Stack redzone shadow values
+ * (These are part of the compiler's ABI; don't change them)
+ */
+#define KASAN_STACK_LEFT 0xF1
+#define KASAN_STACK_MID 0xF2
+#define KASAN_STACK_RIGHT 0xF3
+#define KASAN_STACK_PARTIAL 0xF4
+
+/* Don't break randconfig/all*config builds */
+#ifndef KASAN_ABI_VERSION
+#define KASAN_ABI_VERSION 1
+#endif
+
+struct kasan_access_info {
+ const void *access_addr;
+ const void *first_bad_addr;
+ size_t access_size;
+ bool is_write;
+ unsigned long ip;
+};
+
+/* The layout of this struct is dictated by the compiler */
+struct kasan_source_location {
+ const char *filename;
+ int line_no;
+ int column_no;
+};
+
+/* The layout of this struct is dictated by the compiler */
+struct kasan_global {
+ const void *beg; /* Address of the beginning of the global variable. */
+ size_t size; /* Size of the global variable. */
+ size_t size_with_redzone; /* Size of the variable plus its red zone; 32-byte aligned */
+ const void *name;
+ const void *module_name; /* Name of the module where the global variable is declared. */
+ unsigned long has_dynamic_init; /* This is needed for C++ */
+#if KASAN_ABI_VERSION >= 4
+ struct kasan_source_location *location;
+#endif
+};
+
+static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
+{
+ return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
+ << KASAN_SHADOW_SCALE_SHIFT);
+}
+
+static inline bool kasan_enabled(void)
+{
+ return !current->kasan_depth;
+}
+
+void kasan_report(unsigned long addr, size_t size,
+ bool is_write, unsigned long ip);
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..66ec459
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,285 @@
+/*
+ * This file contains error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * Some code is borrowed from https://github.com/xairy/linux by
+ * Andrey Konovalov <adech.fo@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+
+#include <asm/sections.h>
+
+#include "kasan.h"
+#include "../slab.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 2
+
+static const void *find_first_bad_addr(const void *addr, size_t size)
+{
+ u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+ const void *first_bad_addr = addr;
+
+ while (!shadow_val && first_bad_addr < addr + size) {
+ first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+ shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+ }
+ return first_bad_addr;
+}
+
+static void print_error_description(struct kasan_access_info *info)
+{
+ const char *bug_type = "unknown crash";
+ u8 shadow_val;
+
+ info->first_bad_addr = find_first_bad_addr(info->access_addr,
+ info->access_size);
+
+ shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+ switch (shadow_val) {
+ case KASAN_FREE_PAGE:
+ case KASAN_KMALLOC_FREE:
+ bug_type = "use after free";
+ break;
+ case KASAN_PAGE_REDZONE:
+ case KASAN_KMALLOC_REDZONE:
+ case KASAN_GLOBAL_REDZONE:
+ case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+ bug_type = "out of bounds access";
+ break;
+ case KASAN_STACK_LEFT:
+ case KASAN_STACK_MID:
+ case KASAN_STACK_RIGHT:
+ case KASAN_STACK_PARTIAL:
+ bug_type = "out of bounds on stack";
+ break;
+ }
+
+ pr_err("BUG: KASan: %s in %pS at addr %p\n",
+ bug_type, (void *)info->ip,
+ info->access_addr);
+ pr_err("%s of size %zu by task %s/%d\n",
+ info->is_write ? "Write" : "Read",
+ info->access_size, current->comm, task_pid_nr(current));
+}
+
+static inline bool kernel_or_module_addr(const void *addr)
+{
+ return (addr >= (void *)_stext && addr < (void *)_end)
+ || (addr >= (void *)MODULES_VADDR
+ && addr < (void *)MODULES_END);
+}
+
+static inline bool init_task_stack_addr(const void *addr)
+{
+ return addr >= (void *)&init_thread_union.stack &&
+ (addr <= (void *)&init_thread_union.stack +
+ sizeof(init_thread_union.stack));
+}
+
+static void print_address_description(struct kasan_access_info *info)
+{
+ const void *addr = info->access_addr;
+
+ if ((addr >= (void *)PAGE_OFFSET) &&
+ (addr < high_memory)) {
+ struct page *page = virt_to_head_page(addr);
+
+ if (PageSlab(page)) {
+ void *object;
+ struct kmem_cache *cache = page->slab_cache;
+ void *last_object;
+
+ object = virt_to_obj(cache, page_address(page), addr);
+ last_object = page_address(page) +
+ page->objects * cache->size;
+
+ if (unlikely(object > last_object))
+ object = last_object; /* we hit the padding */
+
+ object_err(cache, page, object,
+ "kasan: bad access detected");
+ return;
+ }
+ dump_page(page);
+ }
+
+ if (kernel_or_module_addr(addr)) {
+ if (!init_task_stack_addr(addr))
+ pr_err("Address belongs to variable %pS\n", addr);
+ }
+
+ dump_stack();
+}
+
+static bool row_is_guilty(const void *row, const void *guilty)
+{
+ return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static int shadow_pointer_offset(const void *row, const void *shadow)
+{
+ /* The length of ">ff00ff00ff00ff00: " is
+ * 3 + (BITS_PER_LONG/8)*2 chars.
+ */
+ return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 +
+ (shadow - row) / SHADOW_BYTES_PER_BLOCK + 1;
+}
+
+static void print_shadow_for_address(const void *addr)
+{
+ int i;
+ const void *shadow = kasan_mem_to_shadow(addr);
+ const void *shadow_row;
+
+ shadow_row = (void *)round_down((unsigned long)shadow,
+ SHADOW_BYTES_PER_ROW)
+ - SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+ pr_err("Memory state around the buggy address:\n");
+
+ for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+ const void *kaddr = kasan_shadow_to_mem(shadow_row);
+ char buffer[4 + (BITS_PER_LONG/8)*2];
+
+ snprintf(buffer, sizeof(buffer),
+ (i == 0) ? ">%p: " : " %p: ", kaddr);
+
+ kasan_disable_current();
+ print_hex_dump(KERN_ERR, buffer,
+ DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+ shadow_row, SHADOW_BYTES_PER_ROW, 0);
+ kasan_enable_current();
+
+ if (row_is_guilty(shadow_row, shadow))
+ pr_err("%*c\n",
+ shadow_pointer_offset(shadow_row, shadow),
+ '^');
+
+ shadow_row += SHADOW_BYTES_PER_ROW;
+ }
+}
+
+static DEFINE_SPINLOCK(report_lock);
+
+static void kasan_report_error(struct kasan_access_info *info)
+{
+ unsigned long flags;
+ const char *bug_type;
+
+ spin_lock_irqsave(&report_lock, flags);
+ pr_err("================================="
+ "=================================\n");
+ if (info->access_addr <
+ kasan_shadow_to_mem((void *)KASAN_SHADOW_START)) {
+ if ((unsigned long)info->access_addr < PAGE_SIZE)
+ bug_type = "null-ptr-deref";
+ else if ((unsigned long)info->access_addr < TASK_SIZE)
+ bug_type = "user-memory-access";
+ else
+ bug_type = "wild-memory-access";
+ pr_err("BUG: KASan: %s on address %p\n",
+ bug_type, info->access_addr);
+ pr_err("%s of size %zu by task %s/%d\n",
+ info->is_write ? "Write" : "Read",
+ info->access_size, current->comm,
+ task_pid_nr(current));
+ dump_stack();
+ } else {
+ print_error_description(info);
+ print_address_description(info);
+ print_shadow_for_address(info->first_bad_addr);
+ }
+ pr_err("================================="
+ "=================================\n");
+ add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
+ spin_unlock_irqrestore(&report_lock, flags);
+}
+
+static bool print_till_death;
+static int __init kasan_setup(char *arg)
+{
+ print_till_death = true;
+ return 0;
+}
+__setup("kasan_print_till_death", kasan_setup);
+
+void kasan_report(unsigned long addr, size_t size,
+ bool is_write, unsigned long ip)
+{
+ struct kasan_access_info info;
+ static bool reported = false;
+
+ if (likely(!kasan_enabled()))
+ return;
+
+ if (likely(!print_till_death)) {
+ if (reported)
+ return;
+ reported = true;
+ }
+ info.access_addr = (void *)addr;
+ info.access_size = size;
+ info.is_write = is_write;
+ info.ip = ip;
+
+ kasan_report_error(&info);
+}
+
+
+#define DEFINE_ASAN_REPORT_LOAD(size) \
+void __asan_report_load##size##_noabort(unsigned long addr) \
+{ \
+ kasan_report(addr, size, false, _RET_IP_); \
+} \
+EXPORT_SYMBOL(__asan_report_load##size##_noabort)
+
+#define DEFINE_ASAN_REPORT_STORE(size) \
+void __asan_report_store##size##_noabort(unsigned long addr) \
+{ \
+ kasan_report(addr, size, true, _RET_IP_); \
+} \
+EXPORT_SYMBOL(__asan_report_store##size##_noabort)
+
+DEFINE_ASAN_REPORT_LOAD(1);
+DEFINE_ASAN_REPORT_LOAD(2);
+DEFINE_ASAN_REPORT_LOAD(4);
+DEFINE_ASAN_REPORT_LOAD(8);
+DEFINE_ASAN_REPORT_LOAD(16);
+DEFINE_ASAN_REPORT_STORE(1);
+DEFINE_ASAN_REPORT_STORE(2);
+DEFINE_ASAN_REPORT_STORE(4);
+DEFINE_ASAN_REPORT_STORE(8);
+DEFINE_ASAN_REPORT_STORE(16);
+
+void __asan_report_load_n_noabort(unsigned long addr, size_t size)
+{
+ kasan_report(addr, size, false, _RET_IP_);
+}
+EXPORT_SYMBOL(__asan_report_load_n_noabort);
+
+void __asan_report_store_n_noabort(unsigned long addr, size_t size)
+{
+ kasan_report(addr, size, true, _RET_IP_);
+}
+EXPORT_SYMBOL(__asan_report_store_n_noabort);
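
When reading the "Memory state" hex dump that print_shadow_for_address() emits, each byte decodes according to the constants in mm/kasan/kasan.h. A small reference helper, for illustration only (not part of the patch):

/* Decode one shadow byte from the report's memory-state dump.
 * Mirrors the #defines in mm/kasan/kasan.h.
 */
static const char *shadow_byte_meaning(unsigned char v)
{
        if (v == 0)
                return "all 8 bytes accessible";
        if (v < 8)
                return "partial granule: only the first v bytes accessible";
        switch (v) {
        case 0xff: return "freed page";
        case 0xfe: return "kmalloc_large redzone";
        case 0xfc: return "slab object redzone";
        case 0xfb: return "freed slab object";
        case 0xfa: return "global variable redzone";
        case 0xf1: case 0xf2: case 0xf3: case 0xf4:
                return "stack redzone";
        default:   return "unknown";
        }
}
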
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index c8d7f31..7dca1ef 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -98,6 +98,7 @@
#include <asm/processor.h>
#include <linux/atomic.h>

+#include <linux/kasan.h>
#include <linux/kmemcheck.h>
#include <linux/kmemleak.h>
#include <linux/memory_hotplug.h>
@@ -1076,7 +1077,10 @@ static bool update_checksum(struct kmemleak_object *object)
if (!kmemcheck_is_obj_initialized(object->pointer, object->size))
return false;

+ kasan_disable_current();
object->checksum = crc32(0, (void *)object->pointer, object->size);
+ kasan_enable_current();
+
return object->checksum != old_csum;
}

@@ -1127,7 +1131,9 @@ static void scan_block(void *_start, void *_end,
BYTES_PER_POINTER))
continue;

+ kasan_disable_current();
pointer = *ptr;
+ kasan_enable_current();

object = find_and_get_object(pointer, 1);
if (!object)
diff --git a/mm/mempool.c b/mm/mempool.c
index 5499047..db146ad 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -6,25 +6,115 @@
* extreme VM load.
*
* started by Ingo Molnar, Copyright (C) 2001
+ * debugging by David Rientjes, Copyright (C) 2015
*/

#include <linux/mm.h>
#include <linux/slab.h>
+
+#include <linux/highmem.h>
+#include <linux/kmemleak.h>
#include <linux/export.h>
#include <linux/mempool.h>
#include <linux/blkdev.h>
#include <linux/writeback.h>

+#if defined(CONFIG_DEBUG_SLAB) || defined(CONFIG_SLUB_DEBUG_ON)
+static void poison_error(mempool_t *pool, void *element, size_t size,
+ size_t byte)
+{
+ const int nr = pool->curr_nr;
+ const int start = max_t(int, byte - (BITS_PER_LONG / 8), 0);
+ const int end = min_t(int, byte + (BITS_PER_LONG / 8), size);
+ int i;
+
+ pr_err("BUG: mempool element poison mismatch\n");
+ pr_err("Mempool %p size %zu\n", pool, size);
+ pr_err(" nr=%d @ %p: %s0x", nr, element, start > 0 ? "... " : "");
+ for (i = start; i < end; i++)
+ pr_cont("%x ", *(u8 *)(element + i));
+ pr_cont("%s\n", end < size ? "..." : "");
+ dump_stack();
+}
+
+static void __check_element(mempool_t *pool, void *element, size_t size)
+{
+ u8 *obj = element;
+ size_t i;
+
+ for (i = 0; i < size; i++) {
+ u8 exp = (i < size - 1) ? POISON_FREE : POISON_END;
+
+ if (obj[i] != exp) {
+ poison_error(pool, element, size, i);
+ return;
+ }
+ }
+ memset(obj, POISON_INUSE, size);
+}
+
+static void check_element(mempool_t *pool, void *element)
+{
+ /* Mempools backed by slab allocator */
+ if (pool->free == mempool_free_slab || pool->free == mempool_kfree)
+ __check_element(pool, element, ksize(element));
+
+ /* Mempools backed by page allocator */
+ if (pool->free == mempool_free_pages) {
+ int order = (int)(long)pool->pool_data;
+ void *addr = kmap_atomic((struct page *)element);
+
+ __check_element(pool, addr, 1UL << (PAGE_SHIFT + order));
+ kunmap_atomic(addr);
+ }
+}
+
+static void __poison_element(void *element, size_t size)
+{
+ u8 *obj = element;
+
+ memset(obj, POISON_FREE, size - 1);
+ obj[size - 1] = POISON_END;
+}
+
+static void poison_element(mempool_t *pool, void *element)
+{
+ /* Mempools backed by slab allocator */
+ if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
+ __poison_element(element, ksize(element));
+
+ /* Mempools backed by page allocator */
+ if (pool->alloc == mempool_alloc_pages) {
+ int order = (int)(long)pool->pool_data;
+ void *addr = kmap_atomic((struct page *)element);
+
+ __poison_element(addr, 1UL << (PAGE_SHIFT + order));
+ kunmap_atomic(addr);
+ }
+}
+#else /* CONFIG_DEBUG_SLAB || CONFIG_SLUB_DEBUG_ON */
+static inline void check_element(mempool_t *pool, void *element)
+{
+}
+static inline void poison_element(mempool_t *pool, void *element)
+{
+}
+#endif /* CONFIG_DEBUG_SLAB || CONFIG_SLUB_DEBUG_ON */
+
static void add_element(mempool_t *pool, void *element)
{
BUG_ON(pool->curr_nr >= pool->min_nr);
+ poison_element(pool, element);
pool->elements[pool->curr_nr++] = element;
}

static void *remove_element(mempool_t *pool)
{
- BUG_ON(pool->curr_nr <= 0);
- return pool->elements[--pool->curr_nr];
+ void *element = pool->elements[--pool->curr_nr];
+
+ BUG_ON(pool->curr_nr < 0);
+ check_element(pool, element);
+ return element;
}

/**
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 494a081..872e8cc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -25,6 +25,7 @@
#include <linux/compiler.h>
#include <linux/kernel.h>
#include <linux/kmemcheck.h>
+#include <linux/kasan.h>
#include <linux/module.h>
#include <linux/suspend.h>
#include <linux/pagevec.h>
@@ -706,6 +707,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)

trace_mm_page_free(page, order);
kmemcheck_free_shadow(page, order);
+ kasan_free_pages(page, order);

if (PageAnon(page))
page->mapping = NULL;
@@ -871,6 +873,7 @@ static int prep_new_page(struct page *page, int order, gfp_t gfp_flags)

arch_alloc_page(page, order);
kernel_map_pages(page, 1 << order, 1);
+ kasan_alloc_pages(page, order);

if (gfp_flags & __GFP_ZERO)
prep_zero_page(page, order, gfp_flags);
diff --git a/mm/slab.c b/mm/slab.c
index bd88411..dd8f844 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3637,6 +3637,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)

trace_kmalloc(_RET_IP_, ret,
size, cachep->size, flags);
+ kasan_kmalloc(cachep, ret, size);
return ret;
}
EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -3668,6 +3669,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
trace_kmalloc_node(_RET_IP_, ret,
size, cachep->size,
flags, nodeid);
+ kasan_kmalloc(cachep, ret, size);
return ret;
}
EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
diff --git a/mm/slab.h b/mm/slab.h
index 4d6d836..887eb4c 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -227,7 +227,7 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
* to not do even the assignment. In that case, slab_equal_or_root
* will also be a constant.
*/
- if (!memcg_kmem_enabled() && !unlikely(s->flags & SLAB_DEBUG_FREE))
+ if (!unlikely(s->flags & SLAB_DEBUG_FREE))
return s;

page = virt_to_head_page(x);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 7d21d3f..dd1847f 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -20,6 +20,10 @@
#include <asm/page.h>
#include <linux/memcontrol.h>

+#define CREATE_TRACE_POINTS
+#include <trace/events/kmem.h>
+
+#include <linux/kasan.h>
#include "slab.h"

enum slab_state slab_state;
@@ -640,3 +644,104 @@ static int __init slab_proc_init(void)
}
module_init(slab_proc_init);
#endif /* CONFIG_SLABINFO */
+
+static __always_inline void *__do_krealloc(const void *p, size_t new_size,
+ gfp_t flags)
+{
+ void *ret;
+ size_t ks = 0;
+
+ if (p)
+ ks = ksize(p);
+
+ if (ks >= new_size) {
+ kasan_krealloc((void *)p, new_size);
+ return (void *)p;
+ }
+
+ ret = kmalloc_track_caller(new_size, flags);
+ if (ret && p)
+ memcpy(ret, p, ks);
+
+ return ret;
+}
+
+/**
+ * __krealloc - like krealloc() but don't free @p.
+ * @p: object to reallocate memory for.
+ * @new_size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate.
+ *
+ * This function is like krealloc() except it never frees the originally
+ * allocated buffer. Use this if you don't want to free the buffer
+ * immediately, as is the case, for example, with RCU.
+ */
+void *__krealloc(const void *p, size_t new_size, gfp_t flags)
+{
+ if (unlikely(!new_size))
+ return ZERO_SIZE_PTR;
+
+ return __do_krealloc(p, new_size, flags);
+
+}
+EXPORT_SYMBOL(__krealloc);
+
+/**
+ * krealloc - reallocate memory. The contents will remain unchanged.
+ * @p: object to reallocate memory for.
+ * @new_size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate.
+ *
+ * The contents of the object pointed to are preserved up to the
+ * lesser of the new and old sizes. If @p is %NULL, krealloc()
+ * behaves exactly like kmalloc(). If @new_size is 0 and @p is not a
+ * %NULL pointer, the object pointed to is freed.
+ */
+void *krealloc(const void *p, size_t new_size, gfp_t flags)
+{
+ void *ret;
+
+ if (unlikely(!new_size)) {
+ kfree(p);
+ return ZERO_SIZE_PTR;
+ }
+
+ ret = __do_krealloc(p, new_size, flags);
+ if (ret && p != ret)
+ kfree(p);
+
+ return ret;
+}
+EXPORT_SYMBOL(krealloc);
+
+/**
+ * kzfree - like kfree but zero memory
+ * @p: object to free memory of
+ *
+ * The memory of the object @p points to is zeroed before it is freed.
+ * If @p is %NULL, kzfree() does nothing.
+ *
+ * Note: this function zeroes the whole allocated buffer which can be a good
+ * deal bigger than the requested buffer size passed to kmalloc(). So be
+ * careful when using this function in performance sensitive code.
+ */
+void kzfree(const void *p)
+{
+ size_t ks;
+ void *mem = (void *)p;
+
+ if (unlikely(ZERO_OR_NULL_PTR(mem)))
+ return;
+ ks = ksize(mem);
+ memset(mem, 0, ks);
+ kfree(mem);
+}
+EXPORT_SYMBOL(kzfree);
+
+/* Tracepoints definitions. */
+EXPORT_TRACEPOINT_SYMBOL(kmalloc);
+EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
+EXPORT_TRACEPOINT_SYMBOL(kmalloc_node);
+EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc_node);
+EXPORT_TRACEPOINT_SYMBOL(kfree);
+EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free);
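
krealloc(), __krealloc() and kzfree() move here from mm/util.c
(removed further down) together with the kmem tracepoint definitions;
the functional change is the kasan_krealloc() call in __do_krealloc(),
which updates the shadow on the reuse-in-place path so that exactly
new_size bytes are accessible afterwards. A kernel-style illustration
of why that matters (sizes are hypothetical):

    void *p = kmalloc(64, GFP_KERNEL);   /* say: a kmalloc-64 object */

    p = krealloc(p, 16, GFP_KERNEL);     /* ksize(p) >= 16, so the same
                                          * pointer comes back, no copy */
    /* Without kasan_krealloc(), bytes [16, 64) would still be
     * accessible; with it they are redzone again, so: */
    ((char *)p)[32] = 0;                 /* reported by KASAN */
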
diff --git a/mm/slub.c b/mm/slub.c
index deaed7b..1320ba2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -20,6 +20,7 @@
#include <linux/proc_fs.h>
#include <linux/notifier.h>
#include <linux/seq_file.h>
+#include <linux/kasan.h>
#include <linux/kmemcheck.h>
#include <linux/cpu.h>
#include <linux/cpuset.h>
@@ -444,6 +445,8 @@ static void get_map(struct kmem_cache *s, struct page *page, unsigned long *map)
*/
#ifdef CONFIG_SLUB_DEBUG_ON
static int slub_debug = DEBUG_DEFAULT_FLAGS;
+#elif defined(CONFIG_KASAN)
+static int slub_debug = SLAB_STORE_USER;
#else
static int slub_debug;
#endif
@@ -452,12 +455,30 @@ static char *slub_debug_slabs;
static int disable_higher_order_debug;

/*
+ * slub is about to manipulate internal object metadata. This memory lies
+ * outside the range of the allocated object, so accessing it would normally
+ * be reported by kasan as a bounds error. metadata_access_enable() is used
+ * to tell kasan that these accesses are OK.
+ */
+static inline void metadata_access_enable(void)
+{
+ kasan_disable_current();
+}
+
+static inline void metadata_access_disable(void)
+{
+ kasan_enable_current();
+}
+
+/*
* Object debugging
*/
static void print_section(char *text, u8 *addr, unsigned int length)
{
+ metadata_access_enable();
print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
length, 1);
+ metadata_access_disable();
}

static struct track *get_track(struct kmem_cache *s, void *object,
@@ -487,7 +508,9 @@ static void set_track(struct kmem_cache *s, void *object,
trace.max_entries = TRACK_ADDRS_COUNT;
trace.entries = p->addrs;
trace.skip = 3;
+ metadata_access_enable();
save_stack_trace(&trace);
+ metadata_access_disable();

/* See rant in lockdep.c */
if (trace.nr_entries != 0 &&
@@ -613,7 +636,7 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
dump_stack();
}

-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
u8 *object, char *reason)
{
slab_bug(s, "%s", reason);
@@ -660,7 +683,9 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
u8 *fault;
u8 *end;

+ metadata_access_enable();
fault = memchr_inv(start, value, bytes);
+ metadata_access_disable();
if (!fault)
return 1;

@@ -753,7 +778,9 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
if (!remainder)
return 1;

+ metadata_access_enable();
fault = memchr_inv(end - remainder, POISON_INUSE, remainder);
+ metadata_access_disable();
if (!fault)
return 1;
while (end > fault && end[-1] == POISON_INUSE)
@@ -933,6 +960,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags, void
flags &= gfp_allowed_mask;
kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+ kasan_slab_alloc(s, object);
}

static inline void slab_free_hook(struct kmem_cache *s, void *x)
@@ -956,6 +984,8 @@ static inline void slab_free_hook(struct kmem_cache *s, void *x)
#endif
if (!(s->flags & SLAB_DEBUG_OBJECTS))
debug_check_no_obj_freed(x, s->object_size);
+
+ kasan_slab_free(s, x);
}

/*
@@ -1336,8 +1366,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
void *object)
{
setup_object_debug(s, page, object);
- if (unlikely(s->ctor))
+ if (unlikely(s->ctor)) {
+ kasan_unpoison_object_data(s, object);
s->ctor(object);
+ kasan_poison_object_data(s, object);
+ }
}

static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1368,6 +1401,8 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
if (unlikely(s->flags & SLAB_POISON))
memset(start, POISON_INUSE, PAGE_SIZE << order);

+ kasan_poison_slab(page);
+
last = start;
for_each_object(p, s, start, page->objects) {
setup_object(s, page, last);
@@ -2417,6 +2452,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
{
void *ret = slab_alloc(s, gfpflags, _RET_IP_);
trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+ kasan_kmalloc(s, ret, size);
return ret;
}
EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2451,6 +2487,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,

trace_kmalloc_node(_RET_IP_, ret,
size, s->size, gfpflags, node);
+
+ kasan_kmalloc(s, ret, size);
return ret;
}
EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -2838,6 +2876,7 @@ static void early_kmem_cache_node_alloc(int node)
init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
init_tracking(kmem_cache_node, n);
#endif
+ kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
init_kmem_cache_node(n);
inc_slabs_node(kmem_cache_node, node, page->objects);

@@ -3235,6 +3274,8 @@ void *__kmalloc(size_t size, gfp_t flags)

trace_kmalloc(_RET_IP_, ret, size, s->size, flags);

+ kasan_kmalloc(s, ret, size);
+
return ret;
}
EXPORT_SYMBOL(__kmalloc);
@@ -3251,6 +3292,7 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
ptr = page_address(page);

kmemleak_alloc(ptr, size, 1, flags);
+ kasan_kmalloc_large(ptr, size);
return ptr;
}

@@ -3278,12 +3320,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)

trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);

+ kasan_kmalloc(s, ret, size);
+
return ret;
}
EXPORT_SYMBOL(__kmalloc_node);
#endif

-size_t ksize(const void *object)
+static size_t __ksize(const void *object)
{
struct page *page;

@@ -3299,6 +3343,15 @@ size_t ksize(const void *object)

return slab_ksize(page->slab_cache);
}
+
+size_t ksize(const void *object)
+{
+ size_t size = __ksize(object);
+ /* We assume that ksize callers could use the whole allocated area,
+    so we need to unpoison this area. */
+ kasan_krealloc(object, size);
+ return size;
+}
EXPORT_SYMBOL(ksize);

#ifdef CONFIG_SLUB_DEBUG
@@ -3351,6 +3404,7 @@ void kfree(const void *x)
if (unlikely(!PageSlab(page))) {
BUG_ON(!PageCompound(page));
kmemleak_free(x);
+ kasan_kfree_large(x);
__free_memcg_kmem_pages(page, compound_order(page));
return;
}
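
Two patterns in the slub.c changes are worth spelling out. First,
SLUB's own debug code legitimately reads object metadata (redzones,
padding, tracking data) that KASAN keeps poisoned, so each such access
is wrapped in metadata_access_enable()/metadata_access_disable().
Second, setup_object() has to unpoison an object before running its
constructor and re-poison it afterwards, because ctors run at slab
creation time, not at allocation time. The enable/disable pair
presumably reduces to a per-task suppression counter, along these
lines (an illustrative sketch; the field name is an assumption):

    /* Reports are suppressed while the depth counter is nonzero,
     * so the pairs in slub.c may nest safely. */
    static inline void kasan_disable_current(void)
    {
        current->kasan_depth++;
    }

    static inline void kasan_enable_current(void)
    {
        current->kasan_depth--;
    }

    static inline bool kasan_report_enabled(void)
    {
        return !current->kasan_depth;
    }
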
diff --git a/mm/util.c b/mm/util.c
index 0b17252..4b3c88b 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -11,9 +11,6 @@

#include "internal.h"

-#define CREATE_TRACE_POINTS
-#include <trace/events/kmem.h>
-
/**
* kstrdup - allocate space for and copy an existing string
* @s: the string to duplicate
@@ -107,97 +104,6 @@ void *memdup_user(const void __user *src, size_t len)
}
EXPORT_SYMBOL(memdup_user);

-static __always_inline void *__do_krealloc(const void *p, size_t new_size,
- gfp_t flags)
-{
- void *ret;
- size_t ks = 0;
-
- if (p)
- ks = ksize(p);
-
- if (ks >= new_size)
- return (void *)p;
-
- ret = kmalloc_track_caller(new_size, flags);
- if (ret && p)
- memcpy(ret, p, ks);
-
- return ret;
-}
-
-/**
- * __krealloc - like krealloc() but don't free @p.
- * @p: object to reallocate memory for.
- * @new_size: how many bytes of memory are required.
- * @flags: the type of memory to allocate.
- *
- * This function is like krealloc() except it never frees the originally
- * allocated buffer. Use this if you don't want to free the buffer immediately
- * like, for example, with RCU.
- */
-void *__krealloc(const void *p, size_t new_size, gfp_t flags)
-{
- if (unlikely(!new_size))
- return ZERO_SIZE_PTR;
-
- return __do_krealloc(p, new_size, flags);
-
-}
-EXPORT_SYMBOL(__krealloc);
-
-/**
- * krealloc - reallocate memory. The contents will remain unchanged.
- * @p: object to reallocate memory for.
- * @new_size: how many bytes of memory are required.
- * @flags: the type of memory to allocate.
- *
- * The contents of the object pointed to are preserved up to the
- * lesser of the new and old sizes. If @p is %NULL, krealloc()
- * behaves exactly like kmalloc(). If @new_size is 0 and @p is not a
- * %NULL pointer, the object pointed to is freed.
- */
-void *krealloc(const void *p, size_t new_size, gfp_t flags)
-{
- void *ret;
-
- if (unlikely(!new_size)) {
- kfree(p);
- return ZERO_SIZE_PTR;
- }
-
- ret = __do_krealloc(p, new_size, flags);
- if (ret && p != ret)
- kfree(p);
-
- return ret;
-}
-EXPORT_SYMBOL(krealloc);
-
-/**
- * kzfree - like kfree but zero memory
- * @p: object to free memory of
- *
- * The memory of the object @p points to is zeroed before freed.
- * If @p is %NULL, kzfree() does nothing.
- *
- * Note: this function zeroes the whole allocated buffer which can be a good
- * deal bigger than the requested buffer size passed to kmalloc(). So be
- * careful when using this function in performance sensitive code.
- */
-void kzfree(const void *p)
-{
- size_t ks;
- void *mem = (void *)p;
-
- if (unlikely(ZERO_OR_NULL_PTR(mem)))
- return;
- ks = ksize(mem);
- memset(mem, 0, ks);
- kfree(mem);
-}
-EXPORT_SYMBOL(kzfree);
-
/*
* strndup_user - duplicate an existing string from user space
* @s: The string to duplicate
@@ -400,9 +306,3 @@ struct address_space *page_mapping(struct page *page)
}

/* Tracepoints definitions. */
-EXPORT_TRACEPOINT_SYMBOL(kmalloc);
-EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
-EXPORT_TRACEPOINT_SYMBOL(kmalloc_node);
-EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc_node);
-EXPORT_TRACEPOINT_SYMBOL(kfree);
-EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d456560..0c2e8f8 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -31,7 +31,7 @@
#include <asm/uaccess.h>
#include <asm/tlbflush.h>
#include <asm/shmparam.h>
-
+#include <linux/kasan.h>
struct vfree_deferred {
struct llist_head list;
struct work_struct wq;
@@ -1285,7 +1285,7 @@ void unmap_kernel_range(unsigned long addr, unsigned long size)
int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page ***pages)
{
unsigned long addr = (unsigned long)area->addr;
- unsigned long end = addr + area->size - PAGE_SIZE;
+ unsigned long end = addr + get_vm_area_size(area);
int err;

err = vmap_page_range(addr, end, prot, *pages);
@@ -1356,10 +1356,8 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
if (unlikely(!area))
return NULL;

- /*
- * We always allocate a guard page.
- */
- size += PAGE_SIZE;
+ if (!(flags & VM_NO_GUARD))
+ size += PAGE_SIZE;

va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
if (IS_ERR(va)) {
@@ -1461,6 +1459,7 @@ struct vm_struct *remove_vm_area(const void *addr)
spin_unlock(&vmap_area_lock);

vmap_debug_free_range(va->va_start, va->va_end);
+ kasan_free_shadow(vm);
free_unmap_vmap_area(va);
vm->size -= PAGE_SIZE;

@@ -1606,7 +1605,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
unsigned int nr_pages, array_size, i;
gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;

- nr_pages = (area->size - PAGE_SIZE) >> PAGE_SHIFT;
+ nr_pages = get_vm_area_size(area) >> PAGE_SHIFT;
array_size = (nr_pages * sizeof(struct page *));

area->nr_pages = nr_pages;
@@ -1663,6 +1662,7 @@ fail:
* @end: vm area range end
* @gfp_mask: flags for the page level allocator
* @prot: protection mask for the allocated pages
+ * @vm_flags: additional vm area flags (e.g. %VM_NO_GUARD)
* @node: node to use for allocation or NUMA_NO_NODE
* @caller: caller's return address
*
@@ -1672,7 +1672,8 @@ fail:
*/
void *__vmalloc_node_range(unsigned long size, unsigned long align,
unsigned long start, unsigned long end, gfp_t gfp_mask,
- pgprot_t prot, int node, const void *caller)
+ pgprot_t prot, unsigned long vm_flags, int node,
+ const void *caller)
{
struct vm_struct *area;
void *addr;
@@ -1682,8 +1683,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!size || (size >> PAGE_SHIFT) > totalram_pages)
goto fail;

- area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNLIST,
- start, end, node, gfp_mask, caller);
+ area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNLIST |
+ vm_flags, start, end, node, gfp_mask, caller);
if (!area)
goto fail;

@@ -1732,7 +1733,7 @@ static void *__vmalloc_node(unsigned long size, unsigned long align,
int node, const void *caller)
{
return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
- gfp_mask, prot, node, caller);
+ gfp_mask, prot, 0, node, caller);
}

void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
@@ -2038,7 +2039,7 @@ long vread(char *buf, char *addr, unsigned long count)

vm = va->vm;
vaddr = (char *) vm->addr;
- if (addr >= vaddr + vm->size - PAGE_SIZE)
+ if (addr >= vaddr + get_vm_area_size(vm))
continue;
while (addr < vaddr) {
if (count == 0)
@@ -2048,7 +2049,7 @@ long vread(char *buf, char *addr, unsigned long count)
addr++;
count--;
}
- n = vaddr + vm->size - PAGE_SIZE - addr;
+ n = vaddr + get_vm_area_size(vm) - addr;
if (n > count)
n = count;
if (!(vm->flags & VM_IOREMAP))
@@ -2120,7 +2121,7 @@ long vwrite(char *buf, char *addr, unsigned long count)

vm = va->vm;
vaddr = (char *) vm->addr;
- if (addr >= vaddr + vm->size - PAGE_SIZE)
+ if (addr >= vaddr + get_vm_area_size(vm))
continue;
while (addr < vaddr) {
if (count == 0)
@@ -2129,7 +2130,7 @@ long vwrite(char *buf, char *addr, unsigned long count)
addr++;
count--;
}
- n = vaddr + vm->size - PAGE_SIZE - addr;
+ n = vaddr + get_vm_area_size(vm) - addr;
if (n > count)
n = count;
if (!(vm->flags & VM_IOREMAP)) {
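
The vmalloc changes serve two ends: VM_NO_GUARD lets the KASAN shadow
for module space be mapped without the implicit trailing guard page,
and vread()/vwrite()/map_vm_area() stop open-coding the
"size - PAGE_SIZE" guard assumption in favour of get_vm_area_size().
That helper is presumably added to include/linux/vmalloc.h earlier in
this patch; for reference it would look like:

    static inline size_t get_vm_area_size(const struct vm_struct *area)
    {
        if (!(area->flags & VM_NO_GUARD))
            /* actual size, without the guard page */
            return area->size - PAGE_SIZE;
        return area->size;
    }
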
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
new file mode 100644
index 0000000..3f874d2
--- /dev/null
+++ b/scripts/Makefile.kasan
@@ -0,0 +1,29 @@
+ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_INLINE
+ call_threshold := 10000
+else
+ call_threshold := 0
+endif
+
+CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address
+
+CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
+ -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+ --param asan-stack=1 --param asan-globals=1 \
+ --param asan-instrumentation-with-call-threshold=$(call_threshold))
+
+ifeq ($(call cc-option, $(CFLAGS_KASAN_MINIMAL) -Werror),)
+ ifneq ($(CONFIG_COMPILE_TEST),y)
+ $(warning Cannot use CONFIG_KASAN: \
+ -fsanitize=kernel-address is not supported by compiler)
+ endif
+else
+ ifeq ($(CFLAGS_KASAN),)
+ ifneq ($(CONFIG_COMPILE_TEST),y)
+ $(warning CONFIG_KASAN: compiler does not support all options.\
+ Trying minimal configuration)
+ endif
+ CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
+ endif
+endif
+endif
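
call_threshold is what separates the two KASAN modes: 0 routes every
check through an __asan_load*/__asan_store* function call (outline
instrumentation, which older gcc supports), while 10000 effectively
asks GCC to emit the shadow check inline at every access
(CONFIG_KASAN_INLINE: faster, but larger code and newer gcc only).
Roughly what the inline flavour expands to before an 8-byte load (a
sketch of the codegen, not the exact output):

    /* addr is about to be dereferenced; KASAN_SHADOW_OFFSET is the
     * value passed via -fasan-shadow-offset above. */
    u8 shadow = *(u8 *)((addr >> 3) + KASAN_SHADOW_OFFSET);
    if (unlikely(shadow))
        __asan_report_load8_noabort(addr);  /* report, don't panic */
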
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index f97869f..b188171 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
$(CFLAGS_GCOV))
endif

+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (controlled by KASAN_SANITIZE_obj.o and KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+ $(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)y), \
+ $(CFLAGS_KASAN))
+endif
+
# If building the kernel in a separate objtree expand all occurrences
# of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').

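
With this hook in place, any kernel Makefile can exclude a single
object with "KASAN_SANITIZE_main.o := n" or a whole directory with
"KASAN_SANITIZE := n"; that is how code that must not be instrumented
(early boot, the vdso, mm/kasan itself in the upstream series) is
expected to opt out.
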
diff --git a/scripts/module-common.lds b/scripts/module-common.lds
index 0865b3e..10fa8bf 100644
--- a/scripts/module-common.lds
+++ b/scripts/module-common.lds
@@ -16,4 +16,8 @@ SECTIONS {
__kcrctab_unused : { *(SORT(___kcrctab_unused+*)) }
__kcrctab_unused_gpl : { *(SORT(___kcrctab_unused_gpl+*)) }
__kcrctab_gpl_future : { *(SORT(___kcrctab_gpl_future+*)) }
+
+
+ . = ALIGN(8);
+ .init_array 0 : { *(SORT(.init_array.*)) *(.init_array) }
}
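
The linker-script addition matters once globals instrumentation is
enabled: GCC registers each compilation unit's globals with the KASAN
runtime through constructors placed in .init_array sections, and
without collecting those sections here they would be dropped from .ko
files, silently disabling global out-of-bounds checking for modules.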