Re: [PATCH net-next 4/6] net: bcmgenet: add XDP_TX support

From: Nicolai Buchwitz

Date: Fri Mar 13 2026 - 08:46:01 EST


On 13.3.2026 12:37, Subbaraya Sundeep wrote:
> Hi,

Hi Sundeep,

> On 2026-03-13 at 14:50:59, Nicolai Buchwitz (nb@xxxxxxxxxxx) wrote:
> > Implement XDP_TX by submitting XDP frames through the default TX ring
> > (DESC_INDEX). The frame is DMA-mapped and placed into a single TX
> > descriptor with SOP|EOP|APPEND_CRC flags.
> >
> > The xdp_frame pointer is stored in the TX control block so that
> > bcmgenet_free_tx_cb() can call xdp_return_frame() on TX completion,
> > returning the page to the originating page_pool.
> >
> > The page_pool DMA direction is changed from DMA_FROM_DEVICE to
> > DMA_BIDIRECTIONAL to support the TX DMA mapping of received pages.
> >
> > Signed-off-by: Nicolai Buchwitz <nb@xxxxxxxxxxx>
> > ---
> >  .../net/ethernet/broadcom/genet/bcmgenet.c | 73 ++++++++++++++++++-
> >  .../net/ethernet/broadcom/genet/bcmgenet.h |  1 +
> >  2 files changed, 71 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> > index d43729fc2b1b..373ba5878ca1 100644
> > --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> > +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> > @@ -1893,6 +1893,12 @@ static struct sk_buff *bcmgenet_free_tx_cb(struct device *dev,
> >  		if (cb == GENET_CB(skb)->last_cb)
> >  			return skb;
> >
> > +	} else if (cb->xdpf) {
> > +		dma_unmap_single(dev, dma_unmap_addr(cb, dma_addr),
> > +				 dma_unmap_len(cb, dma_len), DMA_TO_DEVICE);
> > +		dma_unmap_addr_set(cb, dma_addr, 0);
> > +		xdp_return_frame(cb->xdpf);
> > +		cb->xdpf = NULL;
> >  	} else if (dma_unmap_addr(cb, dma_addr)) {
> >  		dma_unmap_page(dev,
> >  			       dma_unmap_addr(cb, dma_addr),
> > @@ -2299,10 +2305,62 @@ static struct sk_buff *bcmgenet_xdp_build_skb(struct bcmgenet_rx_ring *ring,
> >  	return skb;
> >  }
> >
> > +static bool bcmgenet_xdp_xmit_frame(struct bcmgenet_priv *priv,
> > +				    struct xdp_frame *xdpf)
> > +{
> > +	struct bcmgenet_tx_ring *ring = &priv->tx_rings[DESC_INDEX];
> > +	struct device *kdev = &priv->pdev->dev;
> > +	struct enet_cb *tx_cb_ptr;
> > +	dma_addr_t mapping;
> > +	u32 len_stat;
> > +
> > +	spin_lock(&ring->lock);
> > +
> > +	if (ring->free_bds < 1) {
> > +		spin_unlock(&ring->lock);
> > +		return false;
> > +	}
> > +
> > +	tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
> > +
> > +	mapping = dma_map_single(kdev, xdpf->data, xdpf->len, DMA_TO_DEVICE);
>
> AFAIU you are transmitting the frame received on a RQ which is from the page pool
> and already dma mapped. Do you have to do dma_map again?
>
> Thanks,
> Sundeep


You're right. Since the page_pool is configured with DMA_BIDIRECTIONAL,
the pages are already mapped and we can reuse the existing mapping for
XDP_TX frames. The initial implementation took the simple route of
mapping everything uniformly, but that's unnecessary overhead for the
local XDP_TX case.

In v2 I'll add a bool dma_map parameter to bcmgenet_xdp_xmit_frame()
(following the mvneta/stmmac pattern): XDP_TX will reuse the page_pool
mapping via page_pool_get_dma_addr() + dma_sync_single_for_device(),
while ndo_xdp_xmit will keep dma_map_single() for foreign frames. The
cleanup path will be split accordingly.
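
Roughly along these lines (untested sketch only, following the mvneta
pattern; the data-offset arithmetic and the descriptor write-out still
need checking against the actual v2 patch):

```c
static bool bcmgenet_xdp_xmit_frame(struct bcmgenet_priv *priv,
				    struct xdp_frame *xdpf, bool dma_map)
{
	struct bcmgenet_tx_ring *ring = &priv->tx_rings[DESC_INDEX];
	struct device *kdev = &priv->pdev->dev;
	struct enet_cb *tx_cb_ptr;
	dma_addr_t mapping;

	spin_lock(&ring->lock);

	if (ring->free_bds < 1) {
		spin_unlock(&ring->lock);
		return false;
	}

	tx_cb_ptr = bcmgenet_get_txcb(priv, ring);

	if (dma_map) {
		/* ndo_xdp_xmit: foreign frame, needs a fresh mapping */
		mapping = dma_map_single(kdev, xdpf->data, xdpf->len,
					 DMA_TO_DEVICE);
		if (dma_mapping_error(kdev, mapping)) {
			spin_unlock(&ring->lock);
			return false;
		}
	} else {
		/* XDP_TX: reuse the page_pool mapping; the xdp_frame
		 * lives in the headroom, so offset past it to ->data,
		 * then hand buffer ownership back to the device.
		 */
		struct page *page = virt_to_head_page(xdpf->data);

		mapping = page_pool_get_dma_addr(page) +
			  sizeof(*xdpf) + xdpf->headroom;
		dma_sync_single_for_device(kdev, mapping, xdpf->len,
					   DMA_TO_DEVICE);
	}

	tx_cb_ptr->xdpf = xdpf;
	/* Record dma_addr only for frames we mapped ourselves, so the
	 * cleanup path in bcmgenet_free_tx_cb() knows whether to
	 * dma_unmap_single() before xdp_return_frame().
	 */
	dma_unmap_addr_set(tx_cb_ptr, dma_addr, dma_map ? mapping : 0);
	dma_unmap_len_set(tx_cb_ptr, dma_len, xdpf->len);

	/* descriptor setup (SOP|EOP|APPEND_CRC) as in this patch */
	...
```

XDP_TX would then call it with dma_map = false and ndo_xdp_xmit with
dma_map = true, matching what mvneta and stmmac do.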

Regards
Nicolai

[...]