RE: [PATCH v4 net-next 04/14] net: enetc: add MAC filter for i.MX95 ENETC PF

From: Wei Fang
Date: Tue Mar 18 2025 - 05:51:12 EST


> On Tue, Mar 18, 2025 at 05:19:51AM +0200, Wei Fang wrote:
> > You are right, but I'm afraid that Coverity will report an issue, because
> > pf->mac_list and pf->num_mfe are protected by mac_list_lock in other
> > functions. And enetc4_pf_destroy_mac_list() will be called from other functions
> > in the subsequent patches. I don't think it is unnecessary.
>
> Sorry, but I can only take the presented code at face value. If the
> Coverity tool signals an issue, you can still triage it and explain why
> it is a false positive. Or, if it is a real issue, you can add locking
> and provide a good justification for it. But the justification is
> missing now.
>
> > So for your question about Rx packet loss, although it is a very corner
> > case, the solution I can think of is that we can use temporary MAC hash
> > filters before deleting MAFT entries and delete them after adding the
> > MAFT entries. Can you accept this proposal?
>
> That sounds good. I'm thinking, for each MAC address filter, maybe you
> need an information whether it is programmed to hardware as an exact
> match filter or a hash filter, and make use of that functionality here:
> temporarily make all filters be hash-based ones, and then see how many
> you can convert to exact matches. With something like this, it should
> also be easier to maximize the use of the exact match table. Currently,
> AFAIU, if you have 5 MAC address filters, they will all be stored as hash
> filters, even if 4 of them could have been exact matches. Maybe that can
> also be improved with such extra information.
>

Currently, I don't want to make the driver too complicated. I think that if the
number of MAC addresses exceeds the MAFT's capacity, we just use the hash filter.
Otherwise, we use the MAFT.
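
To make the intent concrete, the decision is roughly the following (an
illustrative sketch only; enetc4_pf_set_mac_hash_filter() and
ENETC_MAC_FILTER_TYPE_UC are placeholder names, while
enetc4_pf_set_mac_exact_filter(), netdev_uc_count() and
pf->caps.mac_filter_num come from this patch):

	/* If all unicast addresses fit into the MAFT, program exact-match
	 * entries, otherwise fall back to the hash filter. While the MAFT
	 * is being reprogrammed, a temporary hash filter can cover the
	 * window so that Rx traffic is not dropped.
	 */
	if (netdev_uc_count(ndev) <= pf->caps.mac_filter_num)
		err = enetc4_pf_set_mac_exact_filter(pf, ENETC_MAC_FILTER_TYPE_UC);
	else
		err = enetc4_pf_set_mac_hash_filter(pf, ENETC_MAC_FILTER_TYPE_UC);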

> > > > +static int enetc4_pf_set_mac_exact_filter(struct enetc_pf *pf, int type)
> > > > +{
> > > > + int max_num_mfe = pf->caps.mac_filter_num;
> > > > + struct net_device *ndev = pf->si->ndev;
> > > > + struct enetc_mac_addr *mac_tbl;
> > > > + struct netdev_hw_addr *ha;
> > > > + u8 si_mac[ETH_ALEN];
> > > > + int mac_cnt = 0;
> > > > + int err;
> > > > +
> > > > + mac_tbl = kcalloc(max_num_mfe, sizeof(*mac_tbl), GFP_KERNEL);
> > >
> > > Can't you know ahead of time, based on netdev_uc_count(), whether you
> > > will have space for exact match filters, and avoid unnecessary
> > > allocations if not? enetc_mac_list_is_available() seems way too late.
> >
> > I can add a check before allocating mac_tbl, but enetc_mac_list_is_available()
> > is still needed, because enetc4_pf_add_si_mac_exact_filter() is a common
> > function for all SIs, not only for the PSI.
>
> From the way in which the discussion is progressing in the replies
> above, it sounds to me like maybe this logic will change a bit more.
>
> > > > +static int enetc4_pf_wq_task_init(struct enetc_si *si)
> > > > +{
> > > > + char wq_name[24];
> > > > +
> > > > + INIT_WORK(&si->rx_mode_task, enetc4_pf_do_set_rx_mode);
> > > > + snprintf(wq_name, sizeof(wq_name), "enetc-%s",
> > > > + pci_name(si->pdev));
> > > > + si->workqueue = create_singlethread_workqueue(wq_name);
> > > > + if (!si->workqueue)
> > > > + return -ENOMEM;
> > > > +
> > > > + return 0;
> > > > +}
> > >
> > > Naming scheme is inconsistent here: the function is called "pf" but
> > > takes "si" as argument. Same for enetc4_pf_do_set_rx_mode() where the
> > > rx_mode_task is part of the station interface structure.
> >
> > So change 'pf' to 'psi'?
>
> Sounds better.
>
> > > > + struct hlist_head mac_list; /* MAC address filter table */
> > >
> > > One thing I don't understand is why, given this implementation and
> > > final effect, you even bother to keep the mac_list persistently in
> > > struct enetc_pf. You have:
> > >
> > > enetc4_pf_do_set_rx_mode()
> > > -> enetc4_pf_flush_si_mac_exact_filter(ENETC_MAC_FILTER_TYPE_ALL)
> > > -> hlist_for_each_entry_safe(&pf->mac_list)
> > > -> enetc_mac_list_del_entry()
> > >
> > > which practically deletes all &pf->mac_list elements every time.
> > > So why even store them persistently in the first place? Why not just
> > > create an on-stack INIT_HLIST_HEAD() list?
> >
> > The enetc4_pf_add_si_mac_exact_filter() and
> > enetc4_pf_add_si_mac_exact_filter() are used for all SIs, but only
> > the PF can access the MAFT, and the PSI and VSIs may share the same
> > MAFT entry, so we need to store them in struct enetc_pf. Although we
> > have not added VF support yet, for such shared functions we should
> > design their implementation from the beginning, rather than
> > modifying them when we add VF support.
>
> Ok. We need to find a way in which the code also makes sense today
> (who knows how much time will pass until VSIs are also supported in the
> mainline kernel - we all hope "as soon as possible" but have to plan for
> the worst). I don't disagree with the existence of &pf->mac_list,
> but it seems slightly inconsistent with the fact that you rebuild it
> (for now, completely, but I understand that in the future it will be
> just partially) each time ndo_set_rx_mode() is called.
>
> Are you aware of __dev_uc_sync() / __dev_mc_sync()? They are helpers

All I can say is that I have seen them in the kernel, but I did not spend time
to study them.

> with explicit sync/unsync callbacks per address, so you don't have to
> manually walk using netdev_for_each_uc_addr() / netdev_for_each_mc_addr()
> each time, and instead act only on the delta. I haven't thought this
> suggestion through, but with you mentioning future VSI mailbox support
> for MAC filtering, maybe it is helpful if the PSI's MAC filters are also
> structured in this way.
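
If I understand these helpers correctly, the usage pattern would be roughly as
below (the callback names are placeholders, not the final implementation, and
__dev_mc_sync() takes the analogous multicast callbacks):

	/* Called by __dev_uc_sync() once for each newly added unicast
	 * address, so only the delta needs to be programmed into the
	 * MAC filter.
	 */
	static int enetc4_psi_uc_sync(struct net_device *ndev, const unsigned char *addr)
	{
		/* add 'addr' to the exact match / hash filter here */
		return 0;
	}

	/* Called once for each removed unicast address. */
	static int enetc4_psi_uc_unsync(struct net_device *ndev, const unsigned char *addr)
	{
		/* remove 'addr' from the filter here */
		return 0;
	}

	/* from ndo_set_rx_mode() (or the deferred rx_mode_task): */
	__dev_uc_sync(ndev, enetc4_psi_uc_sync, enetc4_psi_uc_unsync);

I will look into whether this delta-based approach also fits the sharing of
MAFT entries between the PSI and VSIs.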