Re: [PATCH net v2 2/2] page_pool: fix IOMMU crash when driver has already unbound

From: Yunsheng Lin
Date: Sat Oct 05 2024 - 08:39:59 EST


On 10/2/2024 3:37 PM, Paolo Abeni wrote:
> Hi,
>
> On 10/2/24 04:34, Yunsheng Lin wrote:
>> On 10/1/2024 9:32 PM, Paolo Abeni wrote:
>>> Is the problem only tied to VF drivers? It's a pity all the page_pool
>>> users will have to pay a bill for it...

>> I am afraid it is not only tied to VF drivers, as:
>> "attempting DMA unmaps after the driver has already unbound may leak
>> resources or at worst corrupt memory."
>>
>> Unloading a PF driver might cause the above problems too. I guess the
>> probability of crashing is low for the PF, as a PF cannot be disabled
>> unless it can be hot-unplugged, but the probability of leaking
>> resources behind the DMA mapping might be similar.

> Out of sheer ignorance, why/how does the refcount acquired by the page
> pool on the device not prevent unloading?

I am not sure I understand the reasoning behind that, but it seems that
driver unloading does not check the refcount of the device, judging
from the implementation of __device_release_driver().
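
To illustrate (a rough sketch, not actual kernel code; only get_device()
and put_device() are real driver-core APIs here, the rest is made up):
the reference a page_pool holds only pins the lifetime of the
'struct device' object itself, it does not pin the driver binding:

#include <linux/device.h>

/* Pinning the device keeps the 'struct device' memory alive (it bumps
 * the embedded kobject refcount), so dereferencing the pointer stays
 * safe...
 */
static struct device *pool_hold_dev(struct device *dev)
{
        return get_device(dev);
}

/* ...but an unbind (e.g. via the driver's sysfs 'unbind' file) still
 * goes through __device_release_driver() and invokes the driver's
 * ->remove() without consulting that refcount, so the state behind
 * the pool's DMA mappings can be torn down regardless.
 */
static void pool_release_dev(struct device *dev)
{
        put_device(dev);
}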


> I fear the performance impact could be very high: AFAICS, if the item
> array becomes fragmented, insertion will take linear time, with the
> quite large item_count/pool size. If so, it looks like a no-go.

The last checked index is recorded in pool->item_idx, so the insertion
mostly will not take linear time, unless pool->items is almost full and
the old item coming back to the page_pool is the one just checked. The
thought is that if it comes to this point, the page_pool is likely not
the bottleneck anymore, and adding an unlimited number of pool->items
might not make any difference.
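
In other words, something like the following (a simplified sketch; the
field and function names here are illustrative, not necessarily the
ones used in the patch):

struct pool_stub {
        void **items;           /* slot array, NULL means free */
        unsigned int item_cnt;  /* number of slots */
        unsigned int item_idx;  /* where the previous scan stopped */
};

/* Resume scanning from where the previous insertion stopped, so
 * back-to-back insertions are mostly O(1); only a nearly full items
 * array degrades towards a full O(n) scan.
 */
static int item_insert(struct pool_stub *pool, void *item)
{
        unsigned int i;

        for (i = 0; i < pool->item_cnt; i++) {
                unsigned int cur = (pool->item_idx + i) % pool->item_cnt;

                if (!pool->items[cur]) {
                        pool->items[cur] = item;
                        pool->item_idx = (cur + 1) % pool->item_cnt;
                        return cur;
                }
        }

        return -1;      /* all slots busy; fall back to a slow path */
}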

If the insertion does turn out to be a bottleneck, a 'struct llist_head'
can be used to record the old items locklessly on the freeing side, and
llist_del_all() can be used to refill the old items for the allocating
side from the freeing side, which is somewhat like the pool->ring and
pool->alloc currently used in page_pool. As this patchset is already
complicated, doing that would make it more complicated, and I am not
sure it is worth the effort right now as the benefit does not seem
obvious yet.
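
Roughly like below (again only a sketch, assuming the per-page item
could embed a llist_node; 'free_items' is a made-up field name):

#include <linux/llist.h>

struct pool_item {
        struct llist_node llnode;
        /* ... */
};

struct pool_llist {
        struct llist_head free_items;
};

/* Freeing side: lock-free push, safe to call concurrently from any
 * context.
 */
static void item_release(struct pool_llist *pool, struct pool_item *item)
{
        llist_add(&item->llnode, &pool->free_items);
}

/* Allocating side: grab the whole pending list in one atomic xchg and
 * refill from it, much like pool->alloc is refilled from pool->ring
 * today.
 */
static struct llist_node *items_refill(struct pool_llist *pool)
{
        return llist_del_all(&pool->free_items);
}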


> I fear we should consider blocking the device removal until all the
> pages are returned/unmapped ?!? (I hope that could be easier/faster)

As Ilias pointed out, blocking the device removal until all the pages
are returned/unmapped might cause an infinite delay in our testing:

https://lore.kernel.org/netdev/d50ac1a9-f1e2-49ee-b89b-05dac9bc6ee1@xxxxxxxxxx/


> /P