Re: [PATCH net-next 1/2] net: macb: implement ethtool_ops.get|set_channels()

From: Théo Lebrun

Date: Mon Mar 09 2026 - 13:11:46 EST


On Sat Mar 7, 2026 at 4:09 AM CET, Jakub Kicinski wrote:
> On Thu, 05 Mar 2026 18:20:14 +0100 Théo Lebrun wrote:
>> + if (!(bp->caps & MACB_CAPS_QUEUE_DISABLE))
>> + return -EOPNOTSUPP;
>
> Why not set max to 1 in this case?

With !QUEUE_DISABLE, we only know how to run with all queues enabled.
It doesn't imply that max_num_queues == 1.

MACB_CAPS_QUEUE_DISABLE means that the QUEUE_DISABLE field (BIT0) in the
per-queue RBQP register disables Rx for that queue. If we don't have
that capability we can still have multiple queues (if HW supports it),
but we must always run with all of them enabled.

The correct way to deal with `!(bp->caps & MACB_CAPS_QUEUE_DISABLE)`
would be something like:

static void macb_get_channels(struct net_device *netdev,
			      struct ethtool_channels *ch)
{
	struct macb *bp = netdev_priv(netdev);

	ch->max_combined = bp->max_num_queues;
	ch->combined_count = bp->num_queues;

	if (bp->caps & MACB_CAPS_QUEUE_DISABLE)
		/* we know how to disable individual queues */
		ch->min_combined = 1;
	else
		/* we only support running with all queues active */
		ch->min_combined = bp->max_num_queues;
}

But ch->min_combined does not exist.
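So, absent a min_combined field, the constraint would have to be
enforced in .set_channels() itself. A minimal user-space model of that
validation (names and logic are illustrative only, not actual macb
code):

```c
#include <errno.h>
#include <stdbool.h>

/*
 * Illustrative model (not actual driver code): the checks that a
 * set_channels() implementation would need, given that struct
 * ethtool_channels has no min_combined field to advertise a lower
 * bound to the core.
 */
static int validate_combined_count(bool has_queue_disable_cap,
				   unsigned int count,
				   unsigned int max_num_queues)
{
	if (count == 0 || count > max_num_queues)
		return -EINVAL;

	/* Without QUEUE_DISABLE we can only run with all queues on. */
	if (!has_queue_disable_cap && count != max_num_queues)
		return -EOPNOTSUPP;

	return 0;
}
```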

>
>> + if (!count || ch->rx_count || ch->tx_count)
>> + return -EINVAL;
>
> Core should check this for you already
>
>> + if (count > bp->max_num_queues)
>> + return -EINVAL;
>
> and this

Noted, thanks!

>> + if (count == old_count)
>> + return 0;
>> +
>> + if (running)
>> + macb_close(bp->dev);
>> +
>> + bp->num_queues = count;
>> + netif_set_real_num_queues(bp->dev, count, count);
>> +
>> + if (running) {
>> + ret = macb_open(bp->dev);
>> + if (ret) {
>> + bp->num_queues = old_count;
>> + netif_set_real_num_queues(bp->dev, old_count, old_count);
>> + macb_open(bp->dev);
>
> both macb_open() calls may fail under memory pressure
> For new functionality we ask drivers to allocate all necessary
> resources upfront then just swap them in and reconfigure HW

The main reason we want to set queue count is memory savings. If we take
the Mobileye EyeQ5 SoC, it has a small 32MiB RAM alias usable for DMA.
If we waste it on networking we have less available for the remaining
peripherals. Is there some way we could avoid upfront allocations?

.set_ringparam() can help avoid memory waste by using many small
queues. But our main target config is a single large queue (common with
AF_XDP zero-copy when userspace wants a single socket). In that case we
waste a fraction of `(max_num_queues - 1) / max_num_queues` of the ring
memory: 75% with max_num_queues=4 on Mobileye EyeQ5 & EyeQ6H, i.e. the
Mobileye boards on my desk at the moment.

I wonder if we'll see GEM IPs that have >4 queues. The HW manual
indicates up to 16 are supported.

Thanks Jakub,

--
Théo Lebrun, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com