Re: [PATCH net-next 3/3] net: ethernet: ti: cpsw: add XDP support

From: Ivan Khoronzhuk
Date: Mon May 27 2019 - 14:14:07 EST


On Fri, May 24, 2019 at 01:54:18PM +0200, Jesper Dangaard Brouer wrote:
> On Thu, 23 May 2019 21:20:35 +0300
> Ivan Khoronzhuk <ivan.khoronzhuk@xxxxxxxxxx> wrote:
>
> > Add XDP support based on the rx page_pool allocator, one frame per page.
> > The page pool allocator is used with the assumption that only one
> > rx_handler runs at a time. DMA map/unmap is reused from the page pool
> > even though there is no need to map the whole page.

> When using page_pool for DMA mapping, your XDP memory model must use
> one page per packet, which you state you do. This is because the
> __page_pool_put_page() fallback mode does a __page_pool_clean_page(),
> unmapping the DMA. Ilias and I are looking at options for removing this
> restriction, as mlx5 would need it (when we extend the SKB to return
> pages to page_pool).
Thanks for what you do; it can simplify a lot...
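
For reference, roughly what that setup looks like on the driver side: a
one-page-per-frame pool with PP_FLAG_DMA_MAP, registered as
MEM_TYPE_PAGE_POOL for the rx queue. This is only a sketch against the
~5.2-era API (the mydrv_* names are made up, xdp_rxq_info_reg() is the
three-argument form of that time, and the exact create/destroy calls
moved around in this period):

#include <linux/err.h>
#include <linux/netdevice.h>
#include <net/page_pool.h>
#include <net/xdp.h>

/* Illustrative sketch only: mydrv_* names are made up. The pool maps each
 * page once (PP_FLAG_DMA_MAP) and one frame lives in one page (order 0). */
static struct page_pool *mydrv_create_rx_pool(struct net_device *ndev,
					      struct xdp_rxq_info *xdp_rxq,
					      u32 queue_index, int pool_size)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP,	/* pool owns the DMA mapping */
		.order		= 0,			/* one page per frame */
		.pool_size	= pool_size,
		.nid		= NUMA_NO_NODE,
		.dev		= ndev->dev.parent,
		.dma_dir	= DMA_BIDIRECTIONAL,	/* XDP_TX sends from the same page */
	};
	struct page_pool *pool;
	int err;

	pool = page_pool_create(&pp_params);
	if (IS_ERR(pool))
		return pool;

	/* Tie the rx queue to MEM_TYPE_PAGE_POOL so xdp_return_frame()
	 * routes pages back into this pool. */
	err = xdp_rxq_info_reg(xdp_rxq, ndev, queue_index);
	if (err)
		goto err_pool;

	err = xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL, pool);
	if (err)
		goto err_rxq;

	return pool;

err_rxq:
	xdp_rxq_info_unreg(xdp_rxq);
err_pool:
	page_pool_destroy(pool);
	return ERR_PTR(err);
}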


> Unfortunately, I've found another blocker for drivers using the DMA
> mapping feature of page_pool. We don't properly handle the case where
> a remote TX driver has xdp_frames in flight while the sending driver
> is unloaded and takes down the page_pool. Nothing crashes, but we end
> up calling put_page() on a page that is still DMA-mapped.

Seems so, ... for a generic solution. But it looks like there is no issue in
the cpsw case, due to the "like direct" DMA map obtained by adding an offset
(see the sketch below), so whether it's the page_pool DMA map or a DMA
map/unmap per rx/xmit shouldn't make a big difference. Not sure about all
SoCs though...

Despite that, for cpsw I keep the page_pool across down/up, which I'm going
to change in v2.
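
To illustrate the offset point, a sketch of an rx refill path that reuses
the pool's mapping instead of mapping per packet. The mydrv_* names and
MYDRV_HEADROOM are made up, and it assumes the page_pool_get_dma_addr()
helper that was being added around this time:

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

#define MYDRV_HEADROOM	256	/* illustrative XDP headroom */

struct mydrv_rx {		/* illustrative only */
	struct device *dev;
	struct page_pool *pool;
};

static int mydrv_rx_refill_one(struct mydrv_rx *rx)
{
	struct page *page;
	dma_addr_t dma;
	int len;

	page = page_pool_dev_alloc_pages(rx->pool);
	if (!page)
		return -ENOMEM;

	/* The pool already holds a mapping for the whole page; the driver
	 * only adds the headroom offset and syncs the region the hardware
	 * will write, instead of doing a map/unmap per packet. */
	dma = page_pool_get_dma_addr(page) + MYDRV_HEADROOM;
	len = PAGE_SIZE - MYDRV_HEADROOM;
	dma_sync_single_for_device(rx->dev, dma, len, DMA_FROM_DEVICE);

	/* hand (page, dma, len) to the hardware rx descriptor here */

	return 0;
}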


> I'm working on different solutions for fixing this, see here:
> https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool03_shutdown_inflight.org
I hope there will be no changes in the page_pool API.
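
For completeness, a rough sketch of the teardown ordering the problem is
about, with a hypothetical mydrv_* helper; the exact teardown calls and
pool ownership vary by kernel version, so treat it as illustration only:

#include <net/page_pool.h>
#include <net/xdp.h>

/* Rough illustration of the ordering problem described above. */
static void mydrv_rx_queue_free(struct xdp_rxq_info *xdp_rxq,
				struct page_pool *pool)
{
	/* Drops the MEM_TYPE_PAGE_POOL registration for this queue. */
	xdp_rxq_info_unreg(xdp_rxq);

	/* If the pool goes away here while xdp_frames redirected to another
	 * device are still queued on its TX ring, their eventual
	 * xdp_return_frame() ends up as put_page() on a page that is still
	 * DMA-mapped - the case the in-flight/shutdown work aims to fix. */
	page_pool_destroy(pool);
}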

> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer

--
Regards,
Ivan Khoronzhuk