Re: [PATCH net-next v5 4/6] net: bcmgenet: add XDP_TX support
From: Jakub Kicinski
Date: Wed Apr 01 2026 - 23:33:11 EST
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
---
net: bcmgenet: add XDP_TX support
This commit implements XDP_TX using ring 16, which is dedicated to XDP TX
for isolation from SKB TX queues. It also changes the page_pool DMA
direction to bidirectional to allow TX reuse of the existing DMA mapping.
> diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> index b45ba2c2857e..687c3b12d44f 100644
> --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
> +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
[ ... ]
> @@ -2299,11 +2328,96 @@ static struct sk_buff *bcmgenet_xdp_build_skb(struct bcmgenet_rx_ring *ring,
> return skb;
> }
>
> +static bool bcmgenet_xdp_xmit_frame(struct bcmgenet_priv *priv,
> + struct xdp_frame *xdpf, bool dma_map)
> +{
[ ... ]
> + } else {
> + struct page *page = virt_to_page(xdpf->data);
> +
> + /* For local XDP_TX the caller already prepended the TSB
> + * into xdpf->data/len, so dma_len == xdpf->len.
> + */
> + dma_len = xdpf->len;
> + mapping = page_pool_get_dma_addr(page) +
> + sizeof(*xdpf) + xdpf->headroom;
When the RX page pool is created in bcmgenet_rx_ring_create_pool(), it is
initialized with .offset = GENET_XDP_HEADROOM. Does the DMA address
calculation here need to account for that offset? If not, the hardware
appears to be handed a DMA address 256 bytes before the actual packet data,
and would transmit uninitialized memory.
[ ... ]
> @@ -2311,14 +2425,42 @@ static unsigned int bcmgenet_run_xdp(struct bcmgenet_rx_ring *ring,
> switch (act) {
> case XDP_PASS:
> return XDP_PASS;
> + case XDP_TX:
> + /* Prepend a zeroed TSB (Transmit Status Block). The GENET
> + * MAC has TBUF_64B_EN set globally, so hardware expects every
> + * TX buffer to begin with a 64-byte struct status_64. Back
> + * up xdp->data into the RSB area (which is no longer needed
> + * after the BPF program ran) and zero it.
> + */
> + if (xdp->data - xdp->data_hard_start <
> + sizeof(struct status_64)) {
> + page_pool_put_full_page(ring->page_pool, rx_page,
> + true);
> + return XDP_DROP;
> + }
> + xdp->data -= sizeof(struct status_64);
> + xdp->data_meta -= sizeof(struct status_64);
> + memset(xdp->data, 0, sizeof(struct status_64));
> +
> + xdpf = xdp_convert_buff_to_frame(xdp);
Does this headroom check account for the additional sizeof(struct xdp_frame)
required by xdp_convert_buff_to_frame()?
If metadata is present, the adjusted xdp->data_meta could end up below
xdp->data_hard_start. In xdp_convert_buff_to_frame(), if the remaining
headroom minus metasize goes negative, the int result is promoted to a
large unsigned value when compared against sizeof(struct xdp_frame),
bypassing the internal safety check. The struct xdp_frame written at
data_hard_start could then overwrite the newly zeroed status_64 block.
[ ... ]
> @@ -2556,9 +2698,15 @@ static int bcmgenet_rx_poll(struct napi_struct *napi, int budget)
> {
> struct bcmgenet_rx_ring *ring = container_of(napi,
> struct bcmgenet_rx_ring, napi);
> + struct bcmgenet_priv *priv = ring->priv;
> struct dim_sample dim_sample = {};
> unsigned int work_done;
>
> + /* Reclaim completed XDP TX frames (ring 16 has no interrupt) */
> + if (priv->xdp_prog)
> + bcmgenet_tx_reclaim(priv->dev,
> + &priv->xdp_tx_ring, false);
> +
What happens to in-flight frames if the XDP program is detached at runtime
and priv->xdp_prog becomes NULL? Will this permanently lose track of the
page pool buffers and DMA mappings since the NAPI poll stops reclaiming
them?
Additionally, if the interface is the target of an XDP_REDIRECT from another
device, a local XDP program is not required and priv->xdp_prog may be NULL.
In that case frames transmitted for redirected traffic would never be
reclaimed, eventually filling the TX ring.
--
pw-bot: cr