Re: [net-next,2/4] octeontx2-af: npa: cn20k: Add DPC support

From: Simon Horman

Date: Wed Mar 18 2026 - 11:58:31 EST


This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.

For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
octeontx2-af: npa: cn20k: Add DPC support

CN20k introduces 32 diagnostic and performance counters that are shared
across all NPA LFs. Since the counters are shared, each PF driver needs
to request a counter with the required configuration from the AF, so
that a counter can be allocated and mapped to the respective LF with the
requested configuration. Add new mbox messages, npa_dpc_alloc/free, to
handle this. Also ensure all LF to DPC counter mappings are cleared at
LF free/teardown time.

> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> @@ -213,6 +213,10 @@
> M(NPA_HWCTX_DISABLE, 0x403, npa_hwctx_disable, hwctx_disable_req, msg_rsp)\
> M(NPA_CN20K_AQ_ENQ, 0x404, npa_cn20k_aq_enq, npa_cn20k_aq_enq_req, \
> npa_cn20k_aq_enq_rsp) \
> +M(NPA_CN20K_DPC_ALLOC, 0x405, npa_cn20k_dpc_alloc, npa_cn20k_dpc_alloc_req, \
> + npa_cn20k_dpc_alloc_rsp) \
> +M(NPA_CN20K_DPC_FREE, 0x406, npa_cn20k_dpc_free, npa_cn20k_dpc_free_req, \
> + msg_rsp) \

[ ... ]

> +struct npa_cn20k_dpc_alloc_req {
> + struct mbox_msghdr hdr;
> + u16 dpc_conf;
> +};
> +
> +struct npa_cn20k_dpc_alloc_rsp {
> + struct mbox_msghdr hdr;
> + u8 cntr_id;
> +};
> +
> +struct npa_cn20k_dpc_free_req {
> + struct mbox_msghdr hdr;
> + u8 cntr_id;
> +};

[ ... ]

> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npa.c b/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npa.c
> --- a/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npa.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npa.c

[ ... ]

> +int npa_cn20k_dpc_free(struct rvu *rvu, struct npa_cn20k_dpc_free_req *req)
> +{
> + struct rvu_hwinfo *hw = rvu->hw;
> + u16 pcifunc = req->hdr.pcifunc;
> + int cntr, lf, blkaddr, ridx;
> + struct rvu_block *block;
> + struct rvu_pfvf *pfvf;
> + u64 val, lfmask;
> +
> + pfvf = rvu_get_pfvf(rvu, pcifunc);
> +
> + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPA, 0);
> + if (!pfvf->npalf || blkaddr < 0)
> + return NPA_AF_ERR_AF_LF_INVALID;
> +
> + block = &hw->block[blkaddr];
> + lf = rvu_get_lf(rvu, block, pcifunc, 0);
> + if (lf < 0)
> + return NPA_AF_ERR_AF_LF_INVALID;
> +
> + /* 0 to 63 lfs -> idx 0, 64 - 127 lfs -> idx 1 */
> + ridx = lf >> 6;
> + lfmask = BIT_ULL(ridx ? lf - NPA_DPC_LFS_PER_REG : lf);
> + cntr = req->cntr_id;
> +
> + ridx = 2 * cntr + ridx;
^^^^

Does this code need bounds checking on req->cntr_id?

The cntr_id field is u8 (range 0-255) and comes from the mbox request
without validation. Looking at npa_cn20k_dpc_alloc():

cntr = rvu_alloc_rsrc(&rvu->npa_dpc);
if (cntr < 0)
return cntr;
rsp->cntr_id = cntr;

The alloc path returns values bounded by NPA_DPC_MAX (32) through
rvu_alloc_rsrc(). However, the free path directly uses req->cntr_id:

cntr = req->cntr_id;

ridx = 2 * cntr + ridx;

When cntr_id >= 64, this causes rvu_free_rsrc() to call
__clear_bit(cntr_id, bmap) where bmap was allocated for only 32 bits:

rvu_npa_init():
rvu->npa_dpc.max = NPA_DPC_MAX;
err = rvu_alloc_bitmap(&rvu->npa_dpc);

rvu_alloc_bitmap() allocates BITS_TO_LONGS(32) = 1 long = 64 bits on
64-bit systems. For cntr_id values >= 64, __clear_bit() accesses
bmap[cntr_id/64] which is out of bounds, causing a heap out-of-bounds
write.

The alloc path is protected because find_first_zero_bit() is bounded by
rsrc->max, but rvu_free_rsrc() has no bounds check on the id parameter.

Should npa_cn20k_dpc_free() check cntr_id against NPA_DPC_MAX before
using it?

> +
> + val = rvu_read64(rvu, blkaddr, NPA_AF_DPC_PERMITX(ridx));
> + /* Check if the counter is allotted to this LF */
> + if (!(val & lfmask))
> + return 0;
> +
> + /* Revert permission */
> + val &= ~lfmask;
> + rvu_write64(rvu, blkaddr, NPA_AF_DPC_PERMITX(ridx), val);
> +
> + /* Free this counter */
> + rvu_free_rsrc(&rvu->npa_dpc, req->cntr_id);
> +
> + return 0;
> +}

[ ... ]