NFSv4: rare bug *and* root cause captured in the wild

From: John Hubbard
Date: Fri Aug 02 2019 - 21:28:27 EST


Hi,

While testing unrelated (put_user_pages) work on Linux 5.3-rc2+,
I rebooted the NFS *server*, tried to ssh to the client, and the
client dumped a backtrace as shown below.

Good news: I can reliably reproduce it with those steps, at commit
1e78030e5e5b (in linux.git) plus my 34-patch series [1], which of course
is completely unrelated. :) Anyway, I'm noting the exact commit here so
that I don't lose the repro.

I see what's wrong, but I do *not* see an easy fix. Can one of the
designers please recommend an approach to fixing this?

This is almost certainly caused by commit 7e0a0e38fcfe ("SUNRPC:
Replace the queue timer with a delayed work function"), which moved the
queue timer handling into a delayed work item, and therefore into
process (kthread) context. The commit is dated May 1, 2019, but I've
only been running NFSv4 for a couple of days, so the problem has likely
been there ever since that commit, rather than being specific to 5.3.
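
As far as I can tell, the practical effect is that tearing down an
rpc_wait_queue now has to cancel and flush a delayed work item, via
cancel_delayed_work_sync(), which may sleep, whereas deleting a timer
did not. Roughly this shape (just an illustration of the two teardown
styles, with made-up struct names and setup omitted, not the actual
SUNRPC code):

#include <linux/timer.h>
#include <linux/workqueue.h>

struct old_style_queue { struct timer_list timer; };
struct new_style_queue { struct delayed_work dwork; };

static void destroy_old_style(struct old_style_queue *q)
{
        /* Busy-waits for a running timer handler; does not sleep. */
        del_timer_sync(&q->timer);
}

static void destroy_new_style(struct new_style_queue *q)
{
        /*
         * Cancels the work and waits for it to finish; this can sleep,
         * so it must not be called under rcu_read_lock() or a spinlock.
         */
        cancel_delayed_work_sync(&q->dwork);
}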

The call chain enters atomic context via rcu_read_lock() and then ends
up in code that can sleep, so we get the bug:

nfs4_do_reclaim
   rcu_read_lock /* we are now in_atomic() and must not sleep */
   nfs4_purge_state_owners
      nfs4_free_state_owner
         nfs4_destroy_seqid_counter
            rpc_destroy_wait_queue
               cancel_delayed_work_sync
                  __cancel_work_timer
                     __flush_work
                        start_flush_work
                           might_sleep:
                              (kernel/workqueue.c:2975: BUG)
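
For what it's worth, something as small as this should trip the same
check (a rough, untested sketch with made-up names, assuming
CONFIG_DEBUG_ATOMIC_SLEEP is enabled and rcu_read_lock() disables
preemption, i.e. a non-PREEMPT_RCU kernel):

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/rcupdate.h>
#include <linux/jiffies.h>

static void repro_work_fn(struct work_struct *work)
{
        /* The handler itself is irrelevant; the bug is in the teardown. */
}

static DECLARE_DELAYED_WORK(repro_dwork, repro_work_fn);

static int __init repro_init(void)
{
        /* Queue the work far enough out that it is pending, not running. */
        schedule_delayed_work(&repro_dwork, 10 * HZ);

        rcu_read_lock();        /* in_atomic() from here on */
        /*
         * The work has not started, so nothing actually sleeps here, but
         * __cancel_work_timer() -> __flush_work() still runs the
         * might_sleep() check and produces the same splat.
         */
        cancel_delayed_work_sync(&repro_dwork);
        rcu_read_unlock();

        return 0;
}

static void __exit repro_exit(void)
{
        cancel_delayed_work_sync(&repro_dwork);
}

module_init(repro_init);
module_exit(repro_exit);
MODULE_LICENSE("GPL");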

Details: here is the actual backtrace I'm seeing:

BUG: sleeping function called from invalid context at kernel/workqueue.c:2975
in_atomic(): 1, irqs_disabled(): 0, pid: 2224, name: 10.110.48.28-ma
1 lock held by 10.110.48.28-ma/2224:
#0: 00000000d338d2ec (rcu_read_lock){....}, at: nfs4_do_reclaim+0x22/0x6b0 [nfsv4]
CPU: 8 PID: 2224 Comm: 10.110.48.28-ma Not tainted 5.3.0-rc2-hubbard-github+ #52
Hardware name: ASUS X299-A/PRIME X299-A, BIOS 1704 02/14/2019
Call Trace:
dump_stack+0x46/0x60
___might_sleep.cold+0x8e/0x9b
__flush_work+0x61/0x370
? find_held_lock+0x2b/0x80
? add_timer+0x100/0x200
? _raw_spin_lock_irqsave+0x35/0x40
__cancel_work_timer+0xfb/0x180
? nfs4_purge_state_owners+0xf4/0x170 [nfsv4]
nfs4_free_state_owner+0x10/0x50 [nfsv4]
nfs4_purge_state_owners+0x139/0x170 [nfsv4]
nfs4_do_reclaim+0x7a/0x6b0 [nfsv4]
? pnfs_destroy_layouts_byclid+0xc4/0x100 [nfsv4]
nfs4_state_manager+0x6be/0x7f0 [nfsv4]
nfs4_run_state_manager+0x1b/0x40 [nfsv4]
kthread+0xfb/0x130
? nfs4_state_manager+0x7f0/0x7f0 [nfsv4]
? kthread_bind+0x20/0x20
ret_from_fork+0x35/0x40

And last but not least, some words of encouragement: the reason I moved
from NFSv3 to NFSv4 is that the easy authentication (matching UIDs on
client and server) now works perfectly. Yay! So I'm enjoying v4, despite
the occasional minor glitch. :)

[1] https://lore.kernel.org/r/20190802022005.5117-1-jhubbard@xxxxxxxxxx

thanks,
--
John Hubbard
NVIDIA