Re: [PATCH net-next v7 2/8] page_pool: fix timing for checking and disabling napi_local

From: Toke Høiland-Jørgensen
Date: Fri Jan 10 2025 - 10:40:40 EST


Yunsheng Lin <linyunsheng@xxxxxxxxxx> writes:

> A page_pool page may be freed from skb_defer_free_flush() in
> softirq context without being bound to any specific napi; this
> may cause a use-after-free problem due to the time window shown
> below, where CPU 1 may still access napi->list_owner after CPU 0
> has freed the napi memory:
>
>          CPU 0                                CPU 1
>     page_pool_destroy()               skb_defer_free_flush()
>            .                                    .
>            .                     napi = READ_ONCE(pool->p.napi);
>            .                                    .
> page_pool_disable_direct_recycling()            .
>     driver frees napi memory                    .
>            .                                    .
>            .           napi && READ_ONCE(napi->list_owner) == cpuid
>            .                                    .

Have you actually observed this happen, or are you just speculating?
Because I don't think it can; deleting a NAPI instance already requires
observing an RCU grace period, cf. netdevice.h:

/**
 * __netif_napi_del - remove a NAPI context
 * @napi: NAPI context
 *
 * Warning: caller must observe RCU grace period before freeing memory
 * containing @napi. Drivers might want to call this helper to combine
 * all the needed RCU grace periods into a single one.
 */
void __netif_napi_del(struct napi_struct *napi);

/**
 * netif_napi_del - remove a NAPI context
 * @napi: NAPI context
 *
 * netif_napi_del() removes a NAPI context from the network device NAPI list
 */
static inline void netif_napi_del(struct napi_struct *napi)
{
        __netif_napi_del(napi);
        synchronize_net();
}
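
To spell it out: the documented contract is that a driver must let an RCU
grace period elapse between deleting the NAPI instance and freeing the
memory containing it. A rough sketch of the teardown ordering I have in
mind (driver and struct names are made up, illustration only):

/* Sketch only -- foo_rxq/foo_drv_free_rxq are invented names. The
 * synchronize_net() inside netif_napi_del() means any softirq reader
 * that loaded the old pool->p.napi has finished before the memory
 * backing the napi is freed.
 */
static void foo_drv_free_rxq(struct foo_rxq *rxq)
{
        page_pool_destroy(rxq->page_pool);      /* clears pool->p.napi */
        netif_napi_del(&rxq->napi);             /* implies an RCU grace period */
        kfree(rxq);                             /* napi memory freed only now */
}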


> Use the RCU mechanism to avoid the above problem.
>
> Note: the above was found during code review of how to fix the
> problem in [1].
>
> As the following IOMMU fix patch depends on the synchronize_rcu()
> added in this patch, and the time window is so small that this
> doesn't seem to be an urgent fix, target net-next as the IOMMU
> fix patch does.
>
> 1. https://lore.kernel.org/lkml/8067f204-1380-4d37-8ffd-007fc6f26738@xxxxxxxxxx/T/
>
> Fixes: dd64b232deb8 ("page_pool: unlink from napi during destroy")
> Signed-off-by: Yunsheng Lin <linyunsheng@xxxxxxxxxx>
> CC: Alexander Lobakin <aleksander.lobakin@xxxxxxxxx>
> Reviewed-by: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>
> ---
> net/core/page_pool.c | 15 ++++++++++++++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 9733206d6406..1aa7b93bdcc8 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -799,6 +799,7 @@ __page_pool_put_page(struct page_pool *pool, netmem_ref netmem,
>  static bool page_pool_napi_local(const struct page_pool *pool)
>  {
>  	const struct napi_struct *napi;
> +	bool napi_local;
>  	u32 cpuid;
>
>  	if (unlikely(!in_softirq()))
> @@ -814,9 +815,15 @@ static bool page_pool_napi_local(const struct page_pool *pool)
>  	if (READ_ONCE(pool->cpuid) == cpuid)
>  		return true;
>
> +	/* Synchronized with page_pool_destroy() to avoid use-after-free
> +	 * for 'napi'.
> +	 */
> +	rcu_read_lock();
>  	napi = READ_ONCE(pool->p.napi);
> +	napi_local = napi && READ_ONCE(napi->list_owner) == cpuid;
> +	rcu_read_unlock();

This rcu_read_lock()/rcu_read_unlock() pair is redundant in the context you
mention above, since skb_defer_free_flush() is only ever called from softirq
context (within local_bh_disable()), which already functions as an RCU
read-side critical section.
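
In other words, the reader side already looks conceptually like the sketch
below (not the actual net/core code, just an illustration):

/* Since the RCU flavour consolidation, a BH-disabled region is itself an
 * RCU read-side critical section, so a synchronize_rcu() on the destroy
 * side has to wait for it; an extra rcu_read_lock()/rcu_read_unlock()
 * pair adds no protection on top of that.
 */
static bool napi_local_under_bh(const struct page_pool *pool, u32 cpuid)
{
        const struct napi_struct *napi;

        /* caller runs in softirq context, i.e. with BH disabled */
        napi = READ_ONCE(pool->p.napi);
        return napi && READ_ONCE(napi->list_owner) == cpuid;
}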

> -	return napi && READ_ONCE(napi->list_owner) == cpuid;
> +	return napi_local;
>  }
>
>  void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
> @@ -1165,6 +1172,12 @@ void page_pool_destroy(struct page_pool *pool)
>  	if (!page_pool_release(pool))
>  		return;
>
> +	/* Paired with the RCU read-side section in page_pool_napi_local()
> +	 * to ensure the clearing of pool->p.napi in
> +	 * page_pool_disable_direct_recycling() is seen before returning to
> +	 * the driver, which may free the napi instance.
> +	 */
> +	synchronize_rcu();

Most drivers call page_pool_destroy() in a loop, once per RX queue, so
you're now introducing a full synchronize_rcu() wait for each queue.
That can delay tearing down the device significantly, so I don't think
this is a good idea.
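
I.e. with a (made-up, but typical) teardown loop like the one below, a
device with N RX queues now sleeps through N full grace periods:

/* Illustration only: with this patch every page_pool_destroy() call
 * blocks in synchronize_rcu(), once per RX queue.
 */
for (i = 0; i < priv->num_rx_queues; i++)
        page_pool_destroy(priv->rxq[i].page_pool);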

-Toke