Re: [PATCH] RDMA/ulp: Add missing deinit() call
From: Zhijian Li (Fujitsu)
Date: Tue Dec 24 2024 - 02:06:57 EST
The subject prefix should be changed:
RDMA/ulp -> RDMA/rtrs

Additionally, the related reproducer is available at the following link:

Thanks
On 23/12/2024 10:57, Li Zhijian wrote:
> A warning is triggered when repeatedly connecting and disconnecting an
> rnbd device:
> list_add corruption. prev->next should be next (ffff88800b13e480), but was ffff88801ecd1338. (prev=ffff88801ecd1340).
> WARNING: CPU: 1 PID: 36562 at lib/list_debug.c:32 __list_add_valid_or_report+0x7f/0xa0
> Workqueue: ib_cm cm_work_handler [ib_cm]
> RIP: 0010:__list_add_valid_or_report+0x7f/0xa0
> ? __list_add_valid_or_report+0x7f/0xa0
> ib_register_event_handler+0x65/0x93 [ib_core]
> rtrs_srv_ib_dev_init+0x29/0x30 [rtrs_server]
> rtrs_ib_dev_find_or_add+0x124/0x1d0 [rtrs_core]
> __alloc_path+0x46c/0x680 [rtrs_server]
> ? rtrs_rdma_connect+0xa6/0x2d0 [rtrs_server]
> ? rcu_is_watching+0xd/0x40
> ? __mutex_lock+0x312/0xcf0
> ? get_or_create_srv+0xad/0x310 [rtrs_server]
> ? rtrs_rdma_connect+0xa6/0x2d0 [rtrs_server]
> rtrs_rdma_connect+0x23c/0x2d0 [rtrs_server]
> ? __lock_release+0x1b1/0x2d0
> cma_cm_event_handler+0x4a/0x1a0 [rdma_cm]
> cma_ib_req_handler+0x3a0/0x7e0 [rdma_cm]
> cm_process_work+0x28/0x1a0 [ib_cm]
> ? _raw_spin_unlock_irq+0x2f/0x50
> cm_req_handler+0x618/0xa60 [ib_cm]
> cm_work_handler+0x71/0x520 [ib_cm]
>
> Fix it by invoking `deinit()` to properly unregister the IB event
> handler.
>
> Fixes: 667db86bcbe8 ("RDMA/rtrs: Register ib event handler")
> Signed-off-by: Li Zhijian <lizhijian@xxxxxxxxxxx>
> ---
> drivers/infiniband/ulp/rtrs/rtrs.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
> index 4e17d546d4cc..3b3efecd0817 100644
> --- a/drivers/infiniband/ulp/rtrs/rtrs.c
> +++ b/drivers/infiniband/ulp/rtrs/rtrs.c
> @@ -580,6 +580,9 @@ static void dev_free(struct kref *ref)
>  	dev = container_of(ref, typeof(*dev), ref);
>  	pool = dev->pool;
>
> +	if (pool->ops && pool->ops->deinit)
> +		pool->ops->deinit(dev);
> +
>  	mutex_lock(&pool->mutex);
>  	list_del(&dev->entry);
>  	mutex_unlock(&pool->mutex);
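
For reference, the init/deinit pairing this fix restores on the server side
looks roughly like the sketch below. rtrs_srv_ib_dev_init() and
ib_register_event_handler() appear in the trace above; the deinit-side names
(rtrs_srv_ib_dev_deinit, the event_handler field, rtrs_srv_ib_event_handler,
dev_pd_ops) are written here only for illustration and are assumptions rather
than quotes from the tree:

static int rtrs_srv_ib_dev_init(struct rtrs_ib_dev *dev)
{
	/* Register a per-device IB event handler when the dev is added
	 * to the pool (this is what shows up in the trace above).
	 */
	INIT_IB_EVENT_HANDLER(&dev->event_handler, dev->ib_dev,
			      rtrs_srv_ib_event_handler);
	ib_register_event_handler(&dev->event_handler);

	return 0;
}

static void rtrs_srv_ib_dev_deinit(struct rtrs_ib_dev *dev)
{
	/* Counterpart of the registration above. Without dev_free()
	 * calling this op, the stale handler stays on ib_core's
	 * per-device list and the next registration trips the
	 * list_add debug check.
	 */
	ib_unregister_event_handler(&dev->event_handler);
}

static const struct rtrs_rdma_dev_pd_ops dev_pd_ops = {
	.init	= rtrs_srv_ib_dev_init,
	.deinit	= rtrs_srv_ib_dev_deinit,
};

With dev_free() now calling pool->ops->deinit(dev) before dropping the entry,
every register is matched by exactly one unregister across connect/disconnect
cycles.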