Re: KCSAN: data-race in path_lookupat / vfs_rename

From: Al Viro

Date: Wed Apr 08 2026 - 19:00:54 EST


On Wed, Mar 11, 2026 at 10:54:28AM +0800, Jianzhou Zhao wrote:

> Concurrently, a separate thread traversing the filesystem performs a path lookup and inspects the same intermediate dentry (the rename victim) within `step_into()`. It tests if the dentry needs mount-point translation by calling `d_managed()`:
> ```c
> static inline bool d_managed(const struct dentry *dentry)
> {
> 	return dentry->d_flags & DCACHE_MANAGED_DENTRY; // <--- lockless read
> }
> ```
>
> ### Root Cause Analysis
>
> The data race is triggered because `d_managed()` executes a plain read of `dentry->d_flags` without holding `d_lock` (or using `READ_ONCE`), while `dont_mount()` concurrently modifies `dentry->d_flags` using a plain read-modify-write operation (`|= DCACHE_CANT_MOUNT`).
>
> Although `dont_mount()` properly protects the write with `d_lock`, the lockless reader is oblivious to that lock. KCSAN flags this as a data race because the plain read in `d_managed()` can overlap, on another CPU, with the store sequence the compiler emits for the read-modify-write.

Who cares? d_managed() does not look at the bit in question, so what's
the point of READ_ONCE()? Split 32bit operations or not, if

thread A:
	lock
	x |= 1;
	unlock

thread B:
	return x & 2;

manage to depend upon the order of execution, compiler in question
needs to be tarred, feathered and shoved up the place where its
authors' heads are. Sure, theoretically A could be compiled into
	lock
	x ^= 2;
	x |= 1;
	x ^= 2;
	unlock
but who the hell would do something like that?