RE: [E1000-devel] [PATCH 1/2] if_link: Add VF multicast promiscuous mode control

From: Hiroshi Shimamoto
Date: Tue Jan 20 2015 - 20:08:07 EST


> Subject: RE: [E1000-devel] [PATCH 1/2] if_link: Add VF multicast promiscuous mode control
>
>
>
> > -----Original Message-----
> > From: Hiroshi Shimamoto [mailto:h-shimamoto@xxxxxxxxxxxxx]
> > Sent: Tuesday, January 20, 2015 3:40 PM
> > To: Bjørn Mork
> > Cc: e1000-devel@xxxxxxxxxxxxxxxxxxxxx; netdev@xxxxxxxxxxxxxxx; Choi, Sy
> > Jong; linux-kernel@xxxxxxxxxxxxxxx; Hayato Momma
> > Subject: Re: [E1000-devel] [PATCH 1/2] if_link: Add VF multicast promiscuous
> > mode control
> >
> > > Subject: Re: [PATCH 1/2] if_link: Add VF multicast promiscuous mode
> > > control
> > >
> > > Hiroshi Shimamoto <h-shimamoto@xxxxxxxxxxxxx> writes:
> > >
> > > > From: Hiroshi Shimamoto <h-shimamoto@xxxxxxxxxxxxx>
> > > >
> > > > Add netlink directives and an ndo entry to control VF multicast promiscuous
> > > > mode.
> > > >
> > > > The Intel ixgbe and ixgbevf drivers can handle only 30 multicast MAC
> > > > addresses per VF. That means we cannot assign more than 30 IPv6
> > > > addresses to a single VF interface in a VM, while we want thousands of
> > > > IPv6 addresses in the VM.
> > > >
> > > > The Intel 82599 chip has a multicast promiscuous mode capability.
> > > > When it is enabled, all multicast packets are delivered to the target VF.
> > > >
> > > > This patch prepares the infrastructure to control that VF multicast
> > > > promiscuous functionality.
> > >
> > > Adding a new hook for this seems over-complicated to me. And it still
> > > doesn't solve the real problems that
> > > a) the user has to know about this limit, and
> > > b) has to manually configure the feature
> > >
> > > Most of us, lacking the ability to imagine such arbitrary hardware
> > > limitations, will go through a few hours of frustrating debugging
> > > before we figure this one out...
> > >
> > > Why can't the ixgbevf driver just automatically signal the ixgbe
> > > driver to enable multicast promiscuous mode whenever the list grows
> > > past the limit?
> >
> > I had submitted a patch changing the ixgbe and ixgbevf drivers for this issue:
> > https://lkml.org/lkml/2014/11/27/269
> >
> > That patch introduces an API between the ixgbe and ixgbevf drivers to
> > enable multicast promiscuous mode, and ixgbevf enables it automatically if
> > the number of addresses exceeds 30.
>
> I believe the issue with allowing a VF to automatically enter Promiscuous Multicast without the PF's ok is the concern

So you mean that we should take care of enabling VF multicast promiscuous mode
on the host side, right? That is, the host allows multicast promiscuous mode, the VF
requests it, and only then is VF multicast promiscuous mode enabled.
What do you think is the preferred way to do that on the host side?
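Just to make the question concrete, here is a rough sketch of what I imagine on
the PF side (for discussion only; the mc_promisc_allowed flag and the helper
names are made up for illustration and do not exist in the current ixgbe
driver):

static int ixgbe_handle_vf_mc_promisc_request(struct ixgbe_adapter *adapter,
                                              int vf)
{
        /* Only honor the VF's request if the host administrator has
         * explicitly allowed multicast promiscuous mode for this VF,
         * e.g. via the proposed netlink/ndo control.
         */
        if (!adapter->vfinfo[vf].mc_promisc_allowed)
                return -EPERM;

        /* Set the multicast promiscuous enable bit for this VF in HW. */
        ixgbe_enable_vf_mc_promisc(adapter, vf);
        return 0;
}

With something like this the VF could still request the mode automatically when
its address list overflows, but nothing would change unless the host has opted in.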

> over VM isolation. Of course that isolation, when it comes to multicast, is rather limited anyway, given that our multicast
> filter uses only 12 bits of the address for a match. Still, this (or doing it by default) would open that up considerably
> more (all multicasts). I assume that for your application you're not concerned, but are there other use cases that would
> worry about such things?

Sorry, I didn't catch the point.

What is the issue? I think there is no difference for users who don't want
many multicast addresses in the guest. In the current implementation, overflowed
multicast addresses are silently discarded in ixgbevf. I believe there is no user
who wants to use more than 30 multicast addresses today. And if VF multicast
promiscuous mode is enabled for a certain VF, the behavior of the other VFs does
not change.
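To be concrete about "silently discarded": the current
ixgbevf_update_mc_addr_list_vf() (quoted further down in this mail) continues
roughly as below. I'm writing this from memory, so it may not match the source
exactly, but the point is that the count is capped at 30 and any remaining
addresses are simply never sent to the PF:

        /* cap the list at the 30 hash values that fit in the mailbox message */
        cnt = netdev_mc_count(netdev);
        if (cnt > 30)
                cnt = 30;

        msgbuf[0] = IXGBE_VF_SET_MULTICAST;
        msgbuf[0] |= cnt << IXGBE_VT_MSGINFO_SHIFT;

        i = 0;
        netdev_for_each_mc_addr(ha, netdev) {
                /* addresses beyond the first 30 are silently dropped here */
                if (i == cnt)
                        break;
                vector_list[i++] = ixgbevf_mta_vector(hw, ha->addr);
        }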

thanks,
Hiroshi

>
> Thanks,
> -Don Skidmore <donald.c.skidmore@xxxxxxxxx>
>
> >
> > I got some comments and wanted to clarify that point, but there was no
> > answer.
> > That's why I submitted this patch.
> >
> > Do you think a patch for the ixgbe/ixgbevf driver is preferred?
> >
> >
> > thanks,
> > Hiroshi
> >
> > >
> > > I'd also like to note that this comment in
> > > drivers/net/ethernet/intel/ixgbevf/vf.c
> > > indicates that the author had some ideas about how more than 30
> > > addresses could/should be handled:
> > >
> > > static s32 ixgbevf_update_mc_addr_list_vf(struct ixgbe_hw *hw,
> > >                                            struct net_device *netdev)
> > > {
> > >         struct netdev_hw_addr *ha;
> > >         u32 msgbuf[IXGBE_VFMAILBOX_SIZE];
> > >         u16 *vector_list = (u16 *)&msgbuf[1];
> > >         u32 cnt, i;
> > >
> > >         /* Each entry in the list uses 1 16 bit word.  We have 30
> > >          * 16 bit words available in our HW msg buffer (minus 1 for the
> > >          * msg type).  That's 30 hash values if we pack 'em right.  If
> > >          * there are more than 30 MC addresses to add then punt the
> > >          * extras for now and then add code to handle more than 30 later.
> > >          * It would be unusual for a server to request that many multi-cast
> > >          * addresses except for in large enterprise network environments.
> > >          */
> > >
> > >
> > >
> > > The last 2 lines of that comment are of course totally bogus and
> > > pointless and should be deleted in any case... It's obvious that 30
> > > multicast addresses is ridiculously low for lots of normal use cases.
> > >
> > >
> > > Bjørn