Re: [PATCH net-next 1/2] gve: Update QPL page registration logic

From: Paolo Abeni

Date: Wed Feb 11 2026 - 04:34:26 EST


On 2/7/26 2:17 AM, Joshua Washington wrote:
> +void gve_update_num_qpl_pages(struct gve_priv *priv,
> + struct gve_rx_alloc_rings_cfg *rx_alloc_cfg,
> + struct gve_tx_alloc_rings_cfg *tx_alloc_cfg)
> +{
> + u64 ideal_tx_pages, ideal_rx_pages;
> + u16 tx_num_queues, rx_num_queues;
> + u64 max_pages, tx_pages;
> +
> + if (priv->queue_format == GVE_GQI_QPL_FORMAT) {
> + rx_alloc_cfg->pages_per_qpl = rx_alloc_cfg->ring_size;
> + } else if (priv->queue_format == GVE_DQO_QPL_FORMAT) {
> + /*
> + * We want 2 pages per RX descriptor and half a page per TX
> + * descriptor, which means the fraction ideal_tx_pages /
> + * (ideal_tx_pages + ideal_rx_pages) of the pages we allocate
> + * should be for TX. Shrink proportionally as necessary to avoid
> + * allocating more than max_registered_pages total pages.
> + */
> + tx_num_queues = tx_alloc_cfg->qcfg->num_queues;
> + rx_num_queues = rx_alloc_cfg->qcfg_rx->num_queues;
> +
> + ideal_tx_pages = tx_alloc_cfg->ring_size * tx_num_queues / 2;
> + ideal_rx_pages = rx_alloc_cfg->ring_size * rx_num_queues * 2;
> + max_pages = min(priv->max_registered_pages,
> + ideal_tx_pages + ideal_rx_pages);
> +
> + tx_pages = (max_pages * ideal_tx_pages) /
> + (ideal_tx_pages + ideal_rx_pages);
> + tx_alloc_cfg->pages_per_qpl = tx_pages / tx_num_queues;
> + rx_alloc_cfg->pages_per_qpl = (max_pages - tx_pages) /
> + rx_num_queues;

Does not build on 32-bit systems:

ERROR: modpost: "__udivdi3" [drivers/net/ethernet/google/gve/gve.ko]
undefined!
make[3]: *** [../scripts/Makefile.modpost:147: Module.symvers] Error 1
make[2]: *** [/srv/nipa-poller/net-next/wt-1/Makefile:2005: modpost] Error 2
make[1]: *** [/srv/nipa-poller/net-next/wt-1/Makefile:248: __sub-make]
Error 2
make: *** [Makefile:248: __sub-make] Error 2

AFAICS this is because 'rx_num_queues' above is implicitly promoted to u64,
so the statement yields a 64-bit division. You need to use a div64* variant
here (and possibly somewhere else, too).

/P