Re: [PATCH v4 0/2] nfsd: CB_SEQUENCE error handling fixes and cleanups

From: J. Bruce Fields
Date: Fri Feb 07 2025 - 09:12:33 EST


On Fri, Feb 07, 2025 at 08:45:02AM -0500, Jeff Layton wrote:
> I think this rework makes sense, and I've run these against pynfs,
> fstests, and nfstest, but I'm not sure how well that stresses the
> backchannel error handling.

Very little to none at all, unfortunately, I'd guess.

pynfs/nfs4.1/server41tests/st_trunking.py had some ideas for some simple
tests; maybe that'd be one starting point. Or maybe
server41tests/st_delegation.py.
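
FWIW, a backchannel-oriented case in that style might look something like
the sketch below. It's only an illustration of the shape of the thing, not
working pynfs code: open_file_with_deleg(), open_file(), and
expect_cb_recall() are hypothetical stand-ins for whatever st_delegation.py
actually provides, and injecting CB_SEQUENCE errors from the fake client
would likely need new pynfs plumbing.

    # Rough sketch only -- open_file_with_deleg(), open_file(), and
    # expect_cb_recall() are hypothetical helpers, not real pynfs API.
    from .environment import check
    from xdrdef.nfs4_const import OPEN4_SHARE_ACCESS_READ, \
        OPEN4_SHARE_ACCESS_WRITE
    import nfs_ops
    op = nfs_ops.NFS4ops()

    def testBackchannelRecall(t, env):
        """Force a CB_RECALL and check the backchannel keeps working

        FLAGS: deleg all
        CODE: DELEGCB1
        """
        name = env.testname(t)
        sess1 = env.c1.new_client_session("%s_1" % name)
        fh, deleg = open_file_with_deleg(sess1, name,
                                         OPEN4_SHARE_ACCESS_READ)  # hypothetical

        # A second client opening for write forces the server to send
        # CB_SEQUENCE + CB_RECALL on sess1's backchannel.
        sess2 = env.c1.new_client_session("%s_2" % name)
        open_file(sess2, name, access=OPEN4_SHARE_ACCESS_WRITE)    # hypothetical

        # The interesting bit for these patches: reply to the server's
        # CB_SEQUENCE with errors (bogus slot, DELAY, SERVERFAULT) a few
        # times before letting the recall succeed, and verify the server
        # retries rather than wedging the backchannel.
        expect_cb_recall(sess1, deleg)                              # hypothetical

        # Plain forechannel traffic should still work afterwards.
        res = sess1.compound([op.putfh(fh), op.getfh()])
        check(res)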

--b.

> I'd like to put this into linux-next for now
> and see if any problems arise.
>
> Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
> ---
> Changes in v4:
> - Hold back on session refcounting changes for now and just send CB_SEQUENCE
> error handling rework.
> - Link to v3: https://lore.kernel.org/r/20250129-nfsd-6-14-v3-0-506e71e39e6b@xxxxxxxxxx
>
> Changes in v3:
> - rename cb_session_changed to nfsd4_cb_session_changed
> - rename restart_callback to requeue_callback, and rename need_restart:
> label to requeue:
> - don't increment seq_nr on -ESERVERFAULT
> - comment cleanups
> - drop client-side rpc patch (will send separately)
> - Link to v2: https://lore.kernel.org/r/20250129-nfsd-6-14-v2-0-2700c92f3e44@xxxxxxxxxx
>
> Changes in v2:
> - make nfsd4_session be RCU-freed
> - change code to keep reference to session over callback RPCs
> - rework error handling in nfsd4_cb_sequence_done()
> - move NFSv4.0 handling out of nfsd4_cb_sequence_done()
> - Link to v1: https://lore.kernel.org/r/20250123-nfsd-6-14-v1-0-c1137a4fa2ae@xxxxxxxxxx
>
> ---
> Jeff Layton (2):
> nfsd: overhaul CB_SEQUENCE error handling
> nfsd: lift NFSv4.0 handling out of nfsd4_cb_sequence_done()
>
> fs/nfsd/nfs4callback.c | 107 ++++++++++++++++++++++++++++++-------------------
> 1 file changed, 66 insertions(+), 41 deletions(-)
> ---
> base-commit: 50934b1a613cabba2b917879c3e722882b72f628
> change-id: 20250123-nfsd-6-14-b0797e385dc0
>
> Best regards,
> --
> Jeff Layton <jlayton@xxxxxxxxxx>