KCSAN: data-race in data_push_tail / symbol_string
From: Jianzhou Zhao
Date: Wed Mar 11 2026 - 03:49:20 EST
Subject: [BUG] printk: KCSAN: data-race in data_push_tail / symbol_string
Dear Maintainers,
We are writing to report a KCSAN-detected data race in the Linux kernel, found by our custom fuzzing tool, RacePilot. The race occurs during ringbuffer tail advancement: a reader speculatively loads `blk->id` from a data area that a writer, formatting a string into the ring buffer, is concurrently overwriting. We observed this on Linux kernel version 6.18.0-08691-g2061f18ad76e-dirty.
Call Trace & Context
==================================================================
BUG: KCSAN: data-race in data_push_tail.part.0 / symbol_string
write to 0xffffffff88f194a8 of 1 bytes by task 38579 on cpu 0:
string_nocheck lib/vsprintf.c:658 [inline]
symbol_string+0x129/0x2c0 lib/vsprintf.c:1020
pointer+0x24c/0x920 lib/vsprintf.c:2565
vsnprintf+0x5d0/0xb80 lib/vsprintf.c:2982
vscnprintf+0x41/0x90 lib/vsprintf.c:3042
printk_sprint+0x31/0x1c0 kernel/printk/printk.c:2199
vprintk_store+0x3f6/0x980 kernel/printk/printk.c:2321
vprintk_emit+0xfd/0x540 kernel/printk/printk.c:2412
vprintk_default+0x26/0x30 kernel/printk/printk.c:2451
vprintk+0x1d/0x30 kernel/printk/printk_safe.c:82
_printk+0x63/0x90 kernel/printk/printk.c:2461
printk_stack_address arch/x86/kernel/dumpstack.c:70 [inline]
__show_trace_log_lvl+0x1ed/0x370 arch/x86/kernel/dumpstack.c:282
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0xb0/0xe0 lib/dump_stack.c:120
dump_stack+0x15/0x20 lib/dump_stack.c:129
fail_dump lib/fault-inject.c:73 [inline]
should_fail_ex+0x26b/0x280 lib/fault-inject.c:174
should_fail_alloc_page+0x108/0x130 mm/fail_page_alloc.c:44
prepare_alloc_pages mm/page_alloc.c:4953 [inline]
__alloc_frozen_pages_noprof+0x24d/0x1120 mm/page_alloc.c:5172
alloc_pages_mpol+0x90/0x280 mm/mempolicy.c:2416
folio_alloc_mpol_noprof mm/mempolicy.c:2435 [inline]
vma_alloc_folio_noprof+0xa0/0x170 mm/mempolicy.c:2470
folio_prealloc mm/memory.c:1207 [inline]
alloc_anon_folio mm/memory.c:5175 [inline]
do_anonymous_page mm/memory.c:5232 [inline]
do_pte_missing mm/memory.c:4402 [inline]
handle_pte_fault mm/memory.c:6287 [inline]
__handle_mm_fault+0xf1d/0x21f0 mm/memory.c:6421
handle_mm_fault+0x2ee/0x820 mm/memory.c:6590
do_user_addr_fault arch/x86/mm/fault.c:1336 [inline]
handle_page_fault arch/x86/mm/fault.c:1476 [inline]
exc_page_fault+0x398/0x10d0 arch/x86/mm/fault.c:1532
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618
read to 0xffffffff88f194a8 of 8 bytes by task 38521 on cpu 1:
data_make_reusable kernel/printk/printk_ringbuffer.c:606 [inline]
data_push_tail.part.0+0xe6/0x350 kernel/printk/printk_ringbuffer.c:692
data_push_tail kernel/printk/printk_ringbuffer.c:656 [inline]
data_alloc+0x157/0x330 kernel/printk/printk_ringbuffer.c:1096
prb_reserve+0x44d/0x7d0 kernel/printk/printk_ringbuffer.c:1742
vprintk_store+0x3b4/0x980 kernel/printk/printk.c:2311
vprintk_emit+0xfd/0x540 kernel/printk/printk.c:2412
vprintk_default+0x26/0x30 kernel/printk/printk.c:2451
vprintk+0x1d/0x30 kernel/printk/printk_safe.c:82
_printk+0x63/0x90 kernel/printk/printk.c:2461
_fat_msg+0x91/0xc0 fs/fat/misc.c:62
__fat_fs_error+0x185/0x1b0 fs/fat/misc.c:31
fat_bmap_cluster fs/fat/cache.c:303 [inline]
fat_get_mapped_cluster+0x255/0x260 fs/fat/cache.c:320
fat_bmap+0x14c/0x280 fs/fat/cache.c:384
__fat_get_block fs/fat/inode.c:132 [inline]
fat_get_block+0xa3/0x550 fs/fat/inode.c:194
block_read_full_folio+0x17b/0x480 fs/buffer.c:2461
do_mpage_readpage+0x1bf/0xc80 fs/mpage.c:314
mpage_read_folio+0xc5/0x130 fs/mpage.c:395
fat_read_folio+0x1c/0x30 fs/fat/inode.c:209
filemap_read_folio+0x2b/0x160 mm/filemap.c:2523
filemap_update_page mm/filemap.c:2610 [inline]
filemap_get_pages+0xcef/0x10a0 mm/filemap.c:2741
filemap_read+0x248/0x7e0 mm/filemap.c:2828
generic_file_read_iter+0x1d0/0x240 mm/filemap.c:3019
__kernel_read+0x33a/0x6a0 fs/read_write.c:541
kernel_read+0xb0/0x180 fs/read_write.c:559
prepare_binprm fs/exec.c:1608 [inline]
search_binary_handler fs/exec.c:1655 [inline]
exec_binprm fs/exec.c:1701 [inline]
bprm_execve fs/exec.c:1753 [inline]
bprm_execve+0x510/0xae0 fs/exec.c:1729
do_execveat_common.isra.0+0x2bd/0x3f0 fs/exec.c:1859
do_execveat fs/exec.c:1944 [inline]
__do_sys_execveat fs/exec.c:2018 [inline]
__se_sys_execveat fs/exec.c:2012 [inline]
__x64_sys_execveat+0x78/0x90 fs/exec.c:2012
x64_sys_call+0x1ee4/0x2030 arch/x86/include/generated/asm/syscalls_64.h:323
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xae/0x2c0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
value changed: 0x00000000fffff47a -> 0x302f303978302b6c
Reported by Kernel Concurrency Sanitizer on:
CPU: 1 UID: 0 PID: 38521 Comm: syz.7.1998 Not tainted 6.18.0-08691-g2061f18ad76e-dirty #42 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
==================================================================
Execution Flow & Code Context
On CPU 0, a printing task formats an output string containing a symbol address through `vsprintf.c`, which writes the formatted characters into the destination buffer one byte at a time using plain (unannotated) stores:
```c
// lib/vsprintf.c
static char *string_nocheck(char *buf, char *end, const char *s,
			    struct printf_spec spec)
{
	...
	while (lim--) {
		char c = *s++;
		...
		if (buf < end)
			*buf = c; // <-- Plain Write
		++buf;
		...
	}
	return widen_string(buf, len, end, spec);
}
```
This destination buffer is a data block inside the `printk_ringbuffer` data ring, previously reserved by `data_alloc()`. Concurrently, CPU 1 calls `prb_reserve()`, which reaches `data_make_reusable()` while pushing the data-ring tail forward over the same region, checking whether the blocks it passes can safely be recycled. The reader loads `blk->id` with a plain, unannotated access to determine whether a given logical block has already been recycled:
```c
// kernel/printk/printk_ringbuffer.c
static bool data_make_reusable(struct printk_ringbuffer *rb, ...)
{
	...
	while (need_more_space(data_ring, lpos_begin, lpos_end)) {
		blk = to_block(data_ring, lpos_begin);
		/*
		 * Load the block ID from the data block. This is a data race
		 * against a writer that may have newly reserved this data
		 * area. If the loaded value matches a valid descriptor ID,
		 * ...
		 */
		id = blk->id; /* LMM(data_make_reusable:A) */ // <-- Plain Lockless Read
		...
```
Root Cause Analysis
A data race occurs because the reader accesses `blk->id` with a plain memory access (`id = blk->id`) while a concurrent task (on CPU 0) running `vsprintf` has already reserved this data area and is writing formatted characters over the same physical memory, so CPU 1 may observe bytes mid-update. The race is intentional and documented by the in-code comment: "This is a data race against a writer that may have newly reserved this data area". A garbage value is handled gracefully downstream: the loaded ID is passed to `desc_read()`, and unless it names a valid descriptor whose block positions point back to this exact data block, the value is discarded. Because the access is unannotated, however, it still trips KCSAN.
Unfortunately, we were unable to generate a reproducer for this bug.
Potential Impact
This data race is functionally benign. If `data_make_reusable()` reads formatted text characters instead of a proper `unsigned long` ID, the subsequent descriptor lookup fails the `blk_lpos` consistency check and the value is safely discarded. However, the unannotated access produces KCSAN reports, adding debugging noise that may mask genuine races during prolonged workloads.
Proposed Fix
To silence KCSAN and explicitly document to the kernel memory model that this race is deliberate and tolerated, the read of `blk->id` should be wrapped in the `data_race()` macro:
```diff
--- a/kernel/printk/printk_ringbuffer.c
+++ b/kernel/printk/printk_ringbuffer.c
@@ -616,7 +616,7 @@ static bool data_make_reusable(struct printk_ringbuffer *rb,
* sure it points back to this data block. If the check fails,
* the data area has been recycled by another writer.
*/
- id = blk->id; /* LMM(data_make_reusable:A) */
+ id = data_race(blk->id); /* LMM(data_make_reusable:A) */
d_state = desc_read(desc_ring, id, &desc, NULL,
NULL); /* LMM(data_make_reusable:B) */
```
We hope this report proves helpful.
Best regards,
RacePilot Team