On 2025-01-24 20:54:42 [-0500], Waiman Long wrote:
> The following bug report happened in a PREEMPT_RT kernel.
> [ 30.957705] BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
> [ 30.957711] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 2012, name: kwatchdog
> [ 30.962673] preempt_count: 1, expected: 0
> [ 30.962676] RCU nest depth: 0, expected: 0
> [ 30.962680] 3 locks held by kwatchdog/2012:
> [ 30.962684] #0: ffffffff8af2da60 (clocksource_mutex){+.+.}-{3:3}, at: clocksource_watchdog_kthread+0x13/0x50
> [ 30.967703] #1: ffffffff8aa8d4d0 (cpu_hotplug_lock){++++}-{0:0}, at: clocksource_verify_percpu.part.0+0x5c/0x330
> [ 30.972774] #2: ffff9fe02f5f33e0 ((batched_entropy_u32.lock)){+.+.}-{2:2}, at: get_random_u32+0x4f/0x110
> [ 30.977827] Preemption disabled at:
> [ 30.977830] [<ffffffff88c1fe56>] clocksource_verify_percpu.part.0+0x66/0x330
> [ 30.982837] CPU: 33 PID: 2012 Comm: kwatchdog Not tainted 5.14.0-503.23.1.el9_5.x86_64+rt-debug #1
> [ 30.982843] Hardware name: HPE ProLiant DL385 Gen10 Plus/ProLiant DL385 Gen10 Plus, BIOS A42 04/29/2021
> [ 30.982846] Call Trace:
> [ 30.982850] <TASK>
> [ 30.983821] dump_stack_lvl+0x57/0x81
> [ 30.983821] __might_resched.cold+0xf4/0x12f
> [ 30.983824] rt_spin_lock+0x4c/0x100
> [ 30.988833] get_random_u32+0x4f/0x110
> [ 30.988833] clocksource_verify_choose_cpus+0xab/0x1a0
> [ 30.988833] clocksource_verify_percpu.part.0+0x6b/0x330
> [ 30.993894] __clocksource_watchdog_kthread+0x193/0x1a0
> [ 30.993898] clocksource_watchdog_kthread+0x18/0x50
> [ 30.993898] kthread+0x114/0x140
> [ 30.993898] ret_from_fork+0x2c/0x50
> [ 31.002864] </TASK>
> It is due to the fact that get_random_u32() is called in
> clocksource_verify_choose_cpus() with preemption disabled.
> If crng_ready() is true by the time get_random_u32() is called, the
> batched_entropy_u32 local lock will be acquired. In a PREEMPT_RT kernel,
> it is an rtmutex and we can't acquire it with preemption disabled.
> Fix this problem by using the less random get_random_bytes() function
> which will not take any lock. In fact, it has the same randomness as
> get_random_u32_below() when crng_ready() is false.

So how does get_random_bytes() not take any locks? It takes locks in my
tree. You two have a lockless tree?
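
Just to make sure we are talking about the same thing: my reading of the
proposed change boils down to something like the sketch below. This is a
paraphrase of the description above, not the actual diff, and the helper
name cs_watchdog_pick_cpu() is made up for illustration:

#include <linux/random.h>

/*
 * Paraphrased sketch of the proposed selection: pull raw bytes via
 * get_random_bytes() instead of calling get_random_u32_below(), which
 * takes the batched_entropy_u32 local lock once crng_ready() is true.
 */
static unsigned int cs_watchdog_pick_cpu(unsigned int ceil)
{
	u32 rnd;

	get_random_bytes(&rnd, sizeof(rnd));
	return rnd % ceil;	/* slight modulo bias, harmless here */
}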

In case your tree is not lockless yet, couldn't we run the selection
loop verify_n_cpus + 1 times with preemption enabled? Then disable
preemption after returning from clocksource_verify_choose_cpus() and
either drop the current CPU from the mask if it is in there, or drop a
random one, so that we end up with the intended verify_n_cpus CPUs in
the set.
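
As a rough sketch of that idea, assuming clocksource_verify_choose_cpus()
is taught to pick verify_n_cpus + 1 CPUs into cpus_chosen (names follow my
reading of kernel/time/clocksource.c; nothing here is a tested diff):

	/* CPU selection may now sleep and take the batched-entropy lock. */
	clocksource_verify_choose_cpus();	/* picks verify_n_cpus + 1 CPUs */

	preempt_disable();
	if (cpumask_test_cpu(smp_processor_id(), &cpus_chosen)) {
		/* Never verify against ourselves. */
		cpumask_clear_cpu(smp_processor_id(), &cpus_chosen);
	} else {
		/* Drop one pick (first or a random one) to get back to verify_n_cpus. */
		cpumask_clear_cpu(cpumask_first(&cpus_chosen), &cpus_chosen);
	}
	/* ... per-CPU verification over cpus_chosen runs here ... */
	preempt_enable();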

Alternatively (and this might be easier), use migrate_disable() instead
of preempt_disable(), and only use preempt_disable() within the
for_each_cpu() loop if the measured delta is important (which I assume
it is).
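
Something along these lines, loosely following the shape of
clocksource_verify_percpu(); again only an illustration, with local names
taken from my reading of that function:

	u64 csnow_begin, csnow_end;
	int cpu, testcpu;

	migrate_disable();
	testcpu = smp_processor_id();
	clocksource_verify_choose_cpus();	/* may take sleeping locks now */

	for_each_cpu(cpu, &cpus_chosen) {
		if (cpu == testcpu)
			continue;
		/* Only the read/IPI/read window producing the delta runs non-preemptible. */
		preempt_disable();
		csnow_begin = cs->read(cs);
		smp_call_function_single(cpu, clocksource_verify_one_cpu, cs, 1);
		csnow_end = cs->read(cs);
		preempt_enable();
		/* ... evaluate csnow_begin/csnow_end against the remote reading ... */
	}
	migrate_enable();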

Either way, this would avoid having to run with preemption disabled
within clocksource_verify_choose_cpus() while still giving you the
guarantees you need.