Re: [PATCH v1] NFS: Fix possible NULL pointer dereference in nfs_inode_remove_request()
From: Trond Myklebust
Date: Mon Oct 13 2025 - 00:48:22 EST
On Sun, 2025-10-12 at 16:39 +0800, Baolin Liu wrote:
>
> From: Baolin Liu <liubaolin@xxxxxxxxxx>
>
> nfs_page_to_folio(req->wb_head) may return NULL under certain
> conditions, but the function dereferences folio->mapping and calls
> folio_end_dropbehind(folio) unconditionally. This may cause a NULL
> pointer dereference crash.
>
> Fix this by checking that folio is non-NULL before dereferencing it
> or calling folio_end_dropbehind().
>
> Signed-off-by: Baolin Liu <liubaolin@xxxxxxxxxx>
> ---
> fs/nfs/write.c | 11 ++++++-----
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/fs/nfs/write.c b/fs/nfs/write.c
> index 0fb6905736d5..e148308c1923 100644
> --- a/fs/nfs/write.c
> +++ b/fs/nfs/write.c
> @@ -739,17 +739,18 @@ static void nfs_inode_remove_request(struct nfs_page *req)
>  	nfs_page_group_lock(req);
>  	if (nfs_page_group_sync_on_bit_locked(req, PG_REMOVE)) {
>  		struct folio *folio = nfs_page_to_folio(req->wb_head);
> -		struct address_space *mapping = folio->mapping;
>
> -		spin_lock(&mapping->i_private_lock);
>  		if (likely(folio)) {
> +			struct address_space *mapping = folio->mapping;
> +
> +			spin_lock(&mapping->i_private_lock);
>  			folio->private = NULL;
>  			folio_clear_private(folio);
>  			clear_bit(PG_MAPPED, &req->wb_head->wb_flags);
> -		}
> -		spin_unlock(&mapping->i_private_lock);
> +			spin_unlock(&mapping->i_private_lock);
>
> -		folio_end_dropbehind(folio);
> +			folio_end_dropbehind(folio);
> +		}
>  	}
>  	nfs_page_group_unlock(req);
>
> --
> 2.39.2
>
What reason is there to believe that we can ever call
nfs_inode_remove_request() with a NULL value for
req->wb_head->wb_folio, or even with a NULL value for
req->wb_head->wb_folio->mapping?
--
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trondmy@xxxxxxxxxx, trond.myklebust@xxxxxxxxxxxxxxx