[BUG] Possible circular locking dependency in spidev (buf_lock <-> spi_lock)
From: Yujin Hong (YJ Hong)
Date: Fri Feb 27 2026 - 22:38:44 EST
Hello,
I am seeing a lockdep warning in spidev reporting a possible circular
locking dependency between buf_lock and spi_lock.
Kernel: 6.12.23-telechips+
Lockdep warning:
[ 60.977069] WARNING: possible circular locking dependency detected
[ 60.996180] spidev_test_tel/256 is trying to acquire lock:
[ 61.001656] ffff0001a123fa78 (&spidev->spi_lock){+.+.}-{3:3}, at: spidev_sync+0x2c/0x70
[ 61.009674]
[ 61.009674] but task is already holding lock:
[ 61.015495] ffff0001a123fb30 (&spidev->buf_lock){+.+.}-{3:3}, at: spidev_write+0x44/0x13c
[ 61.023679]
[ 61.031843] the existing dependency chain (in reverse order) is:
[ 61.039314]
[ 61.039314] -> #1 (&spidev->buf_lock){+.+.}-{3:3}:
[ 61.054114] spidev_ioctl+0x80/0x718
[ 61.058205] __arm64_sys_ioctl+0x94/0xd8
...
[ 61.094760] -> #0 (&spidev->spi_lock){+.+.}-{3:3}:
[ 61.111988] spidev_sync+0x2c/0x70
[ 61.120342] spidev_write+0x104/0x13c
From inspection of the locking paths:
Path 1 (write/read):

  spidev_write()
    mutex_lock(&spidev->buf_lock)
    -> spidev_sync()
         mutex_lock(&spidev->spi_lock)

  Lock order: buf_lock -> spi_lock

Path 2 (ioctl):

  spidev_ioctl()
    mutex_lock(&spidev->spi_lock)
    mutex_lock(&spidev->buf_lock)

  Lock order: spi_lock -> buf_lock
This results in an apparent lock-ordering inversion:

  buf_lock -> spi_lock
  spi_lock -> buf_lock

Lockdep reports this as a possible circular dependency, with the scenario:

  CPU0: lock(buf_lock) -> lock(spi_lock)
  CPU1: lock(spi_lock) -> lock(buf_lock)
I intentionally added an artificial delay in spidev_write() before
copy_from_user() to enlarge the timing window, then ran concurrent
operations on the same device (spidev0.0):

  Thread A: write()
  Thread B: ioctl()

This reliably produces an actual deadlock:

  CPU0: lock(buf_lock) -> waiting for spi_lock
  CPU1: lock(spi_lock) -> waiting for buf_lock

The system becomes permanently blocked.
My questions:
1. Is this locking order intentional and considered safe?
2. Is this a known lockdep false positive?
Thanks,
Yujin Hong
Full lockdep report:
[ 60.985575] 6.12.23-telechips+ #56 Not tainted
[ 60.990011] ------------------------------------------------------
[ 60.996180] spidev_test_tel/256 is trying to acquire lock:
[ 61.001656] ffff0001a123fa78 (&spidev->spi_lock){+.+.}-{3:3}, at: spidev_sync+0x2c/0x70
[ 61.009674]
[ 61.009674] but task is already holding lock:
[ 61.015495] ffff0001a123fb30 (&spidev->buf_lock){+.+.}-{3:3}, at: spidev_write+0x44/0x13c
[ 61.023679]
[ 61.023679] which lock already depends on the new lock.
[ 61.023679]
[ 61.031843]
[ 61.031843] the existing dependency chain (in reverse order) is:
[ 61.039314]
[ 61.039314] -> #1 (&spidev->buf_lock){+.+.}-{3:3}:
[ 61.045582] __mutex_lock+0xa4/0xfc0
[ 61.049676] mutex_lock_nested+0x24/0x30
[ 61.054114] spidev_ioctl+0x80/0x718
[ 61.058205] __arm64_sys_ioctl+0x94/0xd8
[ 61.062644] invoke_syscall+0x44/0x100
[ 61.066912] el0_svc_common.constprop.0+0x40/0xe0
[ 61.072131] do_el0_svc+0x1c/0x28
[ 61.075961] el0_svc+0x48/0x114
[ 61.079617] el0t_64_sync_handler+0xc0/0xc4
[ 61.084314] el0t_64_sync+0x190/0x194
[ 61.088491]
[ 61.088491] -> #0 (&spidev->spi_lock){+.+.}-{3:3}:
[ 61.094760] __lock_acquire+0x1394/0x1cc8
[ 61.099284] lock_acquire+0x11c/0x324
[ 61.103460] __mutex_lock+0xa4/0xfc0
[ 61.107550] mutex_lock_nested+0x24/0x30
[ 61.111988] spidev_sync+0x2c/0x70
[ 61.115905] spidev_sync_write+0x9c/0xc0
[ 61.120342] spidev_write+0x104/0x13c
[ 61.124520] vfs_write+0xd4/0x5a4
[ 61.128351] ksys_write+0xe0/0xf8
[ 61.132181] __arm64_sys_write+0x1c/0x28
[ 61.136619] invoke_syscall+0x44/0x100
[ 61.140883] el0_svc_common.constprop.0+0x40/0xe0
[ 61.146102] do_el0_svc+0x1c/0x28
[ 61.149933] el0_svc+0x48/0x114
[ 61.153588] el0t_64_sync_handler+0xc0/0xc4
[ 61.158285] el0t_64_sync+0x190/0x194
[ 61.162461]
[ 61.162461] other info that might help us debug this:
[ 61.162461]
[ 61.170452] Possible unsafe locking scenario:
[ 61.170452]
[ 61.176361] CPU0 CPU1
[ 61.180880] ---- ----
[ 61.185400] lock(&spidev->buf_lock);
[ 61.189145] lock(&spidev->spi_lock);
[ 61.195406] lock(&spidev->buf_lock);
[ 61.201667] lock(&spidev->spi_lock);
[ 61.205412]
[ 61.205412] *** DEADLOCK ***
[ 61.205412]
[ 61.211320] 1 lock held by spidev_test_tel/256:
[ 61.215841] #0: ffff0001a123fb30 (&spidev->buf_lock){+.+.}-{3:3}, at: spidev_write+0x44/0x13c
[ 61.224461]
[ 61.224461] stack backtrace:
[ 61.228809] CPU: 3 UID: 0 PID: 256 Comm: spidev_test_tel Not tainted 6.12.23-telechips+ #56
[ 61.228815] Hardware name: Telechips TCC8070 LPD4X322 Main Core (DT)
[ 61.228819] Call trace:
[ 61.228821] dump_backtrace+0x9c/0x11c
[ 61.228829] show_stack+0x18/0x24
[ 61.228836] dump_stack_lvl+0xa4/0xf4
[ 61.228842] dump_stack+0x18/0x24
[ 61.228846] print_circular_bug.isra.0+0x364/0x44c
[ 61.228854] check_noncircular+0x184/0x198
[ 61.228861] __lock_acquire+0x1394/0x1cc8
[ 61.228866] lock_acquire+0x11c/0x324
[ 61.228870] __mutex_lock+0xa4/0xfc0
[ 61.228877] mutex_lock_nested+0x24/0x30
[ 61.228883] spidev_sync+0x2c/0x70
[ 61.228889] spidev_sync_write+0x9c/0xc0
[ 61.228895] spidev_write+0x104/0x13c
[ 61.228901] vfs_write+0xd4/0x5a4
[ 61.228907] ksys_write+0xe0/0xf8
[ 61.228914] __arm64_sys_write+0x1c/0x28
[ 61.228920] invoke_syscall+0x44/0x100
[ 61.228926] el0_svc_common.constprop.0+0x40/0xe0
[ 61.228933] do_el0_svc+0x1c/0x28
[ 61.228939] el0_svc+0x48/0x114
[ 61.228943] el0t_64_sync_handler+0xc0/0xc4
[ 61.228949] el0t_64_sync+0x190/0x194