Re: [RFC PATCH] bpf: cpumap: report queue_index to xdp_rxq_info

From: Alexei Starovoitov

Date: Sat Apr 11 2026 - 14:19:11 EST


On Sat, Apr 11, 2026 at 10:51 AM Jose A. Perez de Azpillaga
<azpijr@xxxxxxxxx> wrote:
>
> When a packet is redirected to a CPU map entry,
> cpu_map_bpf_prog_run_xdp() reconstructs a minimal xdp_rxq_info from
> xdp_frame fields (dev_rx and mem_type) before re-running the BPF program
> on the target CPU. However, queue_index was never preserved across the
> CPU boundary, so BPF programs running in cpumap context always observe
> ctx->rx_queue_index == 0, regardless of which hardware queue originally
> received the packet.
>
> Fix this by storing the originating queue_index in struct xdp_frame,
> following the same pattern already established for dev_rx and mem_type.
> The field is populated from rxq->queue_index in
> xdp_convert_buff_to_frame() during NAPI context, when the rxq_info is
> still valid, and restored into the reconstructed rxq_info in
> cpu_map_bpf_prog_run_xdp().
>
> Also use xdpf->queue_index in __xdp_build_skb_from_frame() to call
> skb_record_rx_queue(), which was previously listed as missing
> information in that function's comment.
>
> Also propagate queue_index in dpaa_a050385_wa_xdpf(), which manually
> constructs a new xdp_frame from an uninitialized page. Without this,
> queue_index would contain stale data from the page allocator.
>
> Signed-off-by: Jose A. Perez de Azpillaga <azpijr@xxxxxxxxx>
> ---
> Note: this patch was only compiled, not tested.
>
> drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 1 +
> include/net/xdp.h | 4 +++-
> kernel/bpf/cpumap.c | 2 +-
> net/core/xdp.c | 2 +-
> 4 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> index 3edc8d142dd5..00e36b0ac74d 100644
> --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> @@ -2281,6 +2281,7 @@ static int dpaa_a050385_wa_xdpf(struct dpaa_priv *priv,
> new_xdpf->headroom = priv->tx_headroom;
> new_xdpf->frame_sz = DPAA_BP_RAW_SIZE;
> new_xdpf->mem_type = MEM_TYPE_PAGE_ORDER0;
> + new_xdpf->queue_index = xdpf->queue_index;
>
> /* Release the initial buffer */
> xdp_return_frame_rx_napi(xdpf);
> diff --git a/include/net/xdp.h b/include/net/xdp.h
> index aa742f413c35..6db10e6a8864 100644
> --- a/include/net/xdp.h
> +++ b/include/net/xdp.h
> @@ -297,10 +297,11 @@ struct xdp_frame {
> u32 headroom;
> u32 metasize; /* uses lower 8-bits */
> /* Lifetime of xdp_rxq_info is limited to NAPI/enqueue time,
> - * while mem_type is valid on remote CPU.
> + * while mem_type and queue_index are valid on remote CPU.
> */
> enum xdp_mem_type mem_type:32;
> struct net_device *dev_rx; /* used by cpumap */
> + u32 queue_index; /* used by cpumap */
> u32 frame_sz;
> u32 flags; /* supported values defined in xdp_buff_flags */
> };
> @@ -441,6 +442,7 @@ struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp)
>
> /* rxq only valid until napi_schedule ends, convert to xdp_mem_type */
> xdp_frame->mem_type = xdp->rxq->mem.type;
> + xdp_frame->queue_index = xdp->rxq->queue_index;
>
> return xdp_frame;
> }
> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
> index 5e59ab896f05..448da572de9a 100644
> --- a/kernel/bpf/cpumap.c
> +++ b/kernel/bpf/cpumap.c
> @@ -197,7 +197,7 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
>
> rxq.dev = xdpf->dev_rx;
> rxq.mem.type = xdpf->mem_type;
> - /* TODO: report queue_index to xdp_rxq_info */
> + rxq.queue_index = xdpf->queue_index;

This is like the 5th time people have attempted to address this TODO.

Just remove that comment. Don't send broken patches.

pw-bot: cr