Re: [PATCH v4 net-next 12/14] net: dsa: netc: add more basic functions support

From: Jakub Kicinski

Date: Thu Apr 02 2026 - 21:19:36 EST


This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
---
net: dsa: netc: add more basic functions support

This patch expands the NETC switch driver with several foundational
features, including FDB and MDB management, STP state handling, MTU
configuration, port setup/teardown, and host flooding support.

At this stage, the driver operates only in standalone port mode. Each
port uses VLAN 0 as its PVID, meaning ingress frames are internally
assigned VID 0 regardless of whether they arrive tagged or untagged.
Note that this does not inject a VLAN 0 header into the frame; the VID
is used purely for subsequent VLAN processing within the switch.

> diff --git a/drivers/net/dsa/netc/netc_main.c b/drivers/net/dsa/netc/netc_main.c
> --- a/drivers/net/dsa/netc/netc_main.c
> +++ b/drivers/net/dsa/netc/netc_main.c

[ ... ]

> +static int netc_add_standalone_vlan_entry(struct netc_switch *priv)
> +{
> +	u32 bitmap_stg = VFT_STG_ID(0) | netc_available_port_bitmap(priv);

Does putting all standalone ports into a single shared VLAN 0 break the DSA
isolation requirements?

If a user manually adds a static FDB entry to a standalone port, it is
programmed into VLAN 0. Because all standalone ports reside in this VLAN, a
frame arriving on one standalone port with a matching destination MAC might
be hardware-switched directly to another standalone port, bypassing the CPU.

[ ... ]

> +static int netc_port_set_fdb_entry(struct netc_port *np,
> +				   const unsigned char *addr, u16 vid)
> +{

[ ... ]

> +	/* If the entry already exists, but not on this port, we need to
> +	 * update the port bitmap. In general, it should only be valid
> +	 * for multicast or broadcast address.
> +	 */
> +	port_bitmap ^= BIT(port);
> +	entry->cfge.port_bitmap = cpu_to_le32(port_bitmap);

Can this XOR operation cause multi-port floods during unicast MAC migrations?

When a unicast MAC address moves to a new port, the bridging layer issues an
ADD for the new port. The XOR operation leaves the old port bit set alongside
the new port bit. Does the driver need to differentiate between unicast and
multicast addresses and explicitly overwrite the bitmap for unicast entries?
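The difference is easy to demonstrate with plain bitmap arithmetic. This is a
standalone sketch, not driver code: the helper names are hypothetical, and the
real update would go through the NTMP FDB table's CFGE field.

```c
#include <assert.h>
#include <stdint.h>

/* Current behaviour: XOR toggles the port bit. If the MAC migrated from
 * port 0 to port 1, the ADD for port 1 leaves port 0's bit set. */
uint32_t fdb_bitmap_xor(uint32_t bitmap, int port)
{
	return bitmap ^ (1u << port);
}

/* Possible fix: overwrite the bitmap for unicast so the entry points at
 * exactly one port, and keep OR-style accumulation for multicast. */
uint32_t fdb_bitmap_update(uint32_t bitmap, int port, int is_unicast)
{
	if (is_unicast)
		return 1u << port;		/* single destination */
	return bitmap | (1u << port);		/* add a group member */
}
```

With a MAC learned on port 0 (bitmap 0x1) migrating to port 1, the XOR path
yields 0x3, so matching frames would be forwarded out both ports.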

[ ... ]

> +static int netc_port_del_fdb_entry(struct netc_port *np,
> +				   const unsigned char *addr, u16 vid)
> +{
> +	struct netc_switch *priv = np->switch_priv;
> +	struct ntmp_user *ntmp = &priv->ntmp;
> +	struct netc_fdb_entry *entry;
> +	int port = np->dp->index;
> +	u32 port_bitmap;
> +	int err = 0;
> +
> +	mutex_lock(&priv->fdbt_lock);
> +
> +	entry = netc_lookup_fdb_entry(priv, addr, vid);
> +	if (unlikely(!entry))
> +		goto unlock_fdbt;

Will this prevent dynamically learned FDB entries from being deleted?

Since the CPU port is configured with MLO_HW, the switch hardware dynamically
learns MAC addresses, but they are not added to the software priv->fdb_list.
If a user attempts to delete a dynamic entry, the software lookup will fail
here and return 0 (success), leaving the stale dynamically learned entry
active in hardware.
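A toy model of the two delete paths makes the failure mode concrete. Everything
here is illustrative (the struct, the counter, and the function names are not
the driver's); in the real driver the hardware delete would be an NTMP FDB
table delete keyed on (addr, vid).

```c
#include <assert.h>
#include <stddef.h>

struct fdb_entry { int in_sw_list; };	/* stand-in for netc_fdb_entry */

int hw_deletes;				/* counts hardware delete requests */

int hw_delete_fdb(void)
{
	hw_deletes++;
	return 0;
}

/* Current behaviour: bail out when the software lookup misses, so a
 * dynamically learned (hardware-only) entry is never removed. */
int del_fdb_current(struct fdb_entry *entry)
{
	if (!entry)
		return 0;		/* silently "succeeds" */
	return hw_delete_fdb();
}

/* Sketch of a fix: fall through to the hardware delete even when the
 * entry is absent from the software list. */
int del_fdb_fixed(struct fdb_entry *entry)
{
	(void)entry;
	return hw_delete_fdb();
}
```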

[ ... ]

> +static int netc_port_change_mtu(struct dsa_switch *ds,
> +				int port, int mtu)
> +{
> +	u32 max_frame_size = mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;
> +	struct netc_port *np = NETC_PORT(ds, port);
> +
> +	if (dsa_is_cpu_port(ds, port))
> +		max_frame_size += NETC_TAG_MAX_LEN;

Does this double-count the DSA tag overhead for the CPU port?

The DSA core computes the CPU port's MTU by adding the tagger overhead before
calling the switch's .port_change_mtu() function. If the maximum MTU of 9582
bytes is requested, the CPU port receives an MTU of 9596. Adding
NETC_TAG_MAX_LEN again here results in 9628 bytes, which exceeds the
hardware limit of 9600 (NETC_MAX_FRAME_LEN) and might lead to truncation.
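One way to see the double count: by the time .port_change_mtu() runs for the
CPU port, the DSA core has already folded the tagger overhead into the mtu
argument, so adding the tag length again inflates the frame size by exactly
that amount. A standalone sketch; `tag_len` stands in for NETC_TAG_MAX_LEN,
whose value is not shown in the quoted hunk.

```c
#include <assert.h>

#define VLAN_ETH_HLEN	18	/* Ethernet header + one 802.1Q tag */
#define ETH_FCS_LEN	4

/* Mirrors the quoted logic: for the CPU port, mtu already includes the
 * tagger overhead, so the is_cpu_port branch double-counts it. */
int max_frame_size(int mtu, int is_cpu_port, int tag_len)
{
	int frame = mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;

	if (is_cpu_port)
		frame += tag_len;	/* counted a second time here */
	return frame;
}
```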

[ ... ]

> +static int netc_port_fdb_add(struct dsa_switch *ds, int port,
> +			     const unsigned char *addr, u16 vid,
> +			     struct dsa_db db)
> +{
> +	struct netc_port *np = NETC_PORT(ds, port);
> +
> +	/* Currently, we only support standalone port mode, so all VLANs
> +	 * should be converted to NETC_STANDALONE_PVID.
> +	 */
> +	return netc_port_set_fdb_entry(np, addr, NETC_STANDALONE_PVID);
> +}

Is it safe to silently override user-requested VLAN IDs to VLAN 0?

If a user explicitly attempts to configure a MAC address for a specific VLAN,
the hardware is programmed for VLAN 0 and returns success. Should this
instead check if the requested vid is not 0 and not NETC_STANDALONE_PVID,
and return -EOPNOTSUPP to inform the networking stack?
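The suggested guard could look like the sketch below. This is only an
illustration of the check, not the driver's code; NETC_STANDALONE_PVID is
assumed to be 0 here, which makes the second comparison redundant but keeps
the condition readable if the PVID ever changes.

```c
#include <errno.h>

#define NETC_STANDALONE_PVID	0	/* assumed value */

/* Refuse FDB entries for VLANs the standalone-only driver cannot honour,
 * instead of silently programming them into VLAN 0. */
int netc_check_fdb_vid(unsigned short vid)
{
	if (vid != 0 && vid != NETC_STANDALONE_PVID)
		return -EOPNOTSUPP;
	return 0;
}
```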

[ ... ]

> +static void netc_port_set_host_flood(struct dsa_switch *ds, int port,
> +				     bool uc, bool mc)
> +{
> +	struct netc_port *np = NETC_PORT(ds, port);
> +
> +	if (np->uc == uc && np->mc == mc)
> +		return;
> +
> +	/* IPFT does not support in-place updates to the KEYE element,
> +	 * so we need to delete the old IPFT entry and then add a new
> +	 * one.
> +	 */
> +	if (np->host_flood)
> +		netc_port_remove_host_flood(np);
> +
> +	if (netc_port_add_host_flood_rule(np, uc, mc))
> +		dev_err(ds->dev, "Failed to add host flood rule on port %d\n",
> +			port);
> +}

Could this permanently lose the host flood configuration if the new rule
addition fails?

If the call to netc_port_add_host_flood_rule() fails (for example, due to
-ENOMEM), the function logs an error and returns, leaving the port entirely
without a host flood rule. Should the driver revert to the previous
configuration on failure to avoid blackholing traffic meant for the CPU?
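A revert-on-failure shape could look like the toy model below. The state
struct, the fail flag, and the function names are all illustrative; the real
add would be the IPFT entry installation, and the revert would re-add the
previous rule.

```c
#include <stdbool.h>

/* Toy model: one rule slot that an "add" can fill or fail to fill. */
struct port_state {
	bool uc, mc;		/* installed flood config */
	bool rule_installed;	/* is an IPFT rule present? */
};

int add_rule(struct port_state *p, bool uc, bool mc, bool fail)
{
	if (fail)
		return -1;	/* e.g. -ENOMEM from the table code */
	p->uc = uc;
	p->mc = mc;
	p->rule_installed = true;
	return 0;
}

void remove_rule(struct port_state *p)
{
	p->rule_installed = false;
}

/* Delete-then-add with a fallback: if installing the new rule fails,
 * reinstall the previous configuration so CPU-bound traffic is not
 * blackholed. */
void set_host_flood(struct port_state *p, bool uc, bool mc, bool fail_new)
{
	bool old_uc = p->uc, old_mc = p->mc;
	bool had_rule = p->rule_installed;

	if (had_rule)
		remove_rule(p);

	if (add_rule(p, uc, mc, fail_new) && had_rule)
		add_rule(p, old_uc, old_mc, false);	/* revert */
}
```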