RE: [RFC net-next 8/8] net/mlx5: Add subdev driver to bind to subdev devices

From: Parav Pandit
Date: Tue Mar 05 2019 - 12:58:07 EST




> -----Original Message-----
> From: Greg KH <gregkh@xxxxxxxxxxxxxxxxxxx>
> Sent: Tuesday, March 5, 2019 1:14 AM
> To: Parav Pandit <parav@xxxxxxxxxxxx>
> Cc: netdev@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> michal.lkml@xxxxxxxxxxx; davem@xxxxxxxxxxxxx; Jiri Pirko
> <jiri@xxxxxxxxxxxx>; Jakub Kicinski <jakub.kicinski@xxxxxxxxxxxxx>
> Subject: Re: [RFC net-next 8/8] net/mlx5: Add subdev driver to bind to
> subdev devices
>
> On Fri, Mar 01, 2019 at 05:21:13PM +0000, Parav Pandit wrote:
> >
> >
> > > -----Original Message-----
> > > From: Greg KH <gregkh@xxxxxxxxxxxxxxxxxxx>
> > > Sent: Friday, March 1, 2019 1:22 AM
> > > To: Parav Pandit <parav@xxxxxxxxxxxx>
> > > Cc: netdev@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> > > michal.lkml@xxxxxxxxxxx; davem@xxxxxxxxxxxxx; Jiri Pirko
> > > <jiri@xxxxxxxxxxxx>
> > > Subject: Re: [RFC net-next 8/8] net/mlx5: Add subdev driver to bind
> > > to subdev devices
> > >
> > > On Thu, Feb 28, 2019 at 11:37:52PM -0600, Parav Pandit wrote:
> > > > Add a subdev driver to probe the subdev devices and create fake
> > > > netdevice for it.
> > >
> > > So I'm guessing here is the "meat" of the whole goal here?
> > >
> > > You just want multiple netdevices per PCI device? Why can't you do
> > > that today in your PCI driver?
> > >
> > Yes, but it is not just multiple netdevices.
> > Let me elaborate in detail.
> >
> > There is a switchdev mode of a PCI function for netdevices.
> > In this mode a given netdev has an additional control netdev (called a
> > representor netdevice = rep-ndev).
> > This rep-ndev is attached to OVS for adding rules, offloads, etc. using
> > standard tc and netfilter infra.
> > Currently this rep-ndev controls the switch side of the settings, but not
> > the host side of the netdev.
> > So there is discussion to create another netdev or devlink port...
> >
> > Additionally this subdev has optional rdma device too.
> >
> > And when we are in switchdev mode, this rdma dev has a similar rdma rep
> > device for control.
> >
> > In some cases we actually don't create a netdev at all, when it is in
> > InfiniBand mode.
> > Here there is PCI device->rdma_device.
> >
> > In another case, a given sub device for rdma is a dual-port device, having
> > a netdevice for each port, which can use the existing netdev->dev_port.
> >
> > Creating 4 devices of two different classes using one iproute2/ip or
> > iproute2/rdma command is a horrible thing to do.
>
> Why is that?
>
When a user creates the device, the user tool needs to return a handle to the device that was created.
Creating multiple devices from one command doesn't make sense; I haven't seen any tool do such a crazy thing.

> > If this sub device has to be a passthrough device, the ip link command
> > will fail badly that day, because we would be creating a sub device which
> > is not even a netdevice.
>
> But it is a network device, right?
>
When there is a passthrough subdevice, no netdevice is created.
We don't want to create a passthrough subdevice using the iproute2/ip tool, which primarily works on netdevices.

> > So iproute2/devlink, which works on bus+device (mainly PCI today), seems
> > the right abstraction point to create sub devices.
> > This also extends to the rich infrastructure that is already built: mapping
> > ports of the device, health reporting, register debug, etc.
> >
> > Additionally, we don't want the mlx5 driver and other drivers to iterate
> > their child devices (split logic in netdev and rdma) for power management.
>
> And how is power management going to work with your new devices? All
> you have here is a tiny shim around a driver bus,
So the subdevices' power management is done before their parent's.
The vendor driver doesn't need to iterate over its child devices to suspend/resume them.

> I do not see any new
> functionality, and as others have said, no way to actually share, or split up,
> the PCI resources.
>
The devlink tool's create command will be able to accept more parameters at device creation time to share and split PCI resources.
This is just the start of the development; the RFC is to agree on a direction.
The devlink tool has parameter options that can be queried/set, and that existing infra will be used for granular device config.
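For example, something along these lines (purely illustrative; the "subdev" object and the parameter names below do not exist in devlink today, they are just a sketch of what such commands could look like):

```shell
# Hypothetical future devlink syntax, for illustration only:
# create a subdevice on a PCI parent and cap its resources
$ devlink subdev add pci/0000:05:00.0 subdev_id 1 max_tx_queues 8

# later, remove that specific subdevice
$ devlink subdev del pci/0000:05:00.0 subdev_id 1
```

The point is that a bus/device-addressed tool returns one handle per created subdevice, regardless of whether a netdev, rdma device, or neither is instantiated on top of it.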

> > Kernel core code does that well today; we would like to leverage it
> > through the subdev bus or mfd pm callbacks.
> >
> > So it is lot more than just creating netdevices.
>
> But that's all you are showing here :)
>
The starting use case is netdev and rdma, but we don't want to create new tools a few months or a year later for passthrough mode, different link layers, etc.

> > > What problem are you trying to solve that others also are having
> > > that requires all of this?
> > >
> > > Adding a new bus type and subsystem is fine, but usually we want
> > > more than just one user of it, as this does not really show how it
> > > is exercised very well.
> > This subdev and devlink infrastructure solves the problem of creating
> > smaller sub devices out of one PCI device.
> > Someone has to start... :-)
>
> That is what a mfd should allow you to do.
>
I did a cursory look at mfd.
It lacks support for removing specific devices, but that is a small gap; it can be enhanced to remove a specific mfd device.

> > To my knowledge, currently Netronome, Broadcom and Mellanox are
> actively using this devlink and switchdev infra today.
>
> Where are they "using it"? This patchset does not show that.
>
devlink and switchdev mode for SRIOV are common among these vendors and more.
The code is in:
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
drivers/net/ethernet/netronome/nfp/nfp_net_main.c
drivers/net/ethernet/mellanox/mlx5/core/main.c

This patchset covers only mlx5, but other vendors who also intend to create subdevices will be able to reuse it.
This RFC doesn't cover other vendors.
Jakub and the netdev list are in CC; we are discussing this with Jakub in this patchset's thread.

> > > Ideally 3 users would be there as that is when it proves itself that
> > > it is flexible enough.
> > >
> >
> > We were looking at drivers/visorbus to see if we could repurpose it, but
> > the GUID device naming scheme is just not user friendly.
>
> You can always change the naming scheme if needed. But why isn't a GUID
> ok?
I think it was ok.
A vendor/device id scheme seems more user friendly, stays in the kernel's control, and fits the existing modpost tools.
A GUID can be used instead of vendor and device ids.
However, visorbus is tied to ACPI, and its device life cycle is very different, handled under a workqueue.
It is also meant for one vendor's s-Par devices.
Its guest drivers have been in staging without a clear roadmap for more than a year now.
So we do not want to depend on it; mfd or a dedicated bus seems a better fit.

> It's very easy to reserve properly, and you do not need a central naming
> "authority".
>
> > > Would just using the mfd subsystem work better for you? That
> > > provides core support for "multi-function" drivers/devices already.
> > > What is missing from that subsystem that does not work for you here?
> > >
> > We were not aware of mfd until now. I have looked at it only at a very
> > high level; it's a wrapper around platform devices and seems widely used.
> > Before the subdev proposal, Jason suggested an alternative: create
> > platform devices and attach drivers to them.
> >
> > When I read the kernel documentation [1], it says "platform devices
> > typically appear as autonomous entities".
> > Here, instead of autonomy, it is in the user's control.
> > Platform devices probably don't disappear a lot in a live system, as
> > opposed to subdevices, which are created and removed dynamically much
> > more often.
> >
> > Not sure if a platform device is an abuse for this purpose or not.
>
> No, do not abuse a platform device.
Yes, that is my point: mfd devices are platform devices.
mfd creates platform devices, and to match them, platform_driver_register() has to be called to bind to them.
I do not know currently if we have the flexibility to say that, instead of binding driver X, driver Y should bind to a given platform device.
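For reference, my understanding is that the binding is purely a name match: the mfd cell's name selects which platform_driver binds to it. A rough, untested kernel-style sketch (the "mlx5_subdev" cell/driver name is made up for illustration):

```c
#include <linux/mfd/core.h>
#include <linux/module.h>
#include <linux/platform_device.h>

/*
 * Parent PCI driver side: each mfd_cell becomes a platform device
 * whose name decides which platform_driver will bind to it.
 */
static const struct mfd_cell mlx5_subdev_cells[] = {
	{ .name = "mlx5_subdev" },	/* hypothetical cell name */
};

static int mlx5_create_subdevs(struct device *parent)
{
	return mfd_add_devices(parent, PLATFORM_DEVID_AUTO,
			       mlx5_subdev_cells,
			       ARRAY_SIZE(mlx5_subdev_cells),
			       NULL, 0, NULL);
}

/* Sub-driver side: binds only because .driver.name matches the cell name. */
static int mlx5_subdev_probe(struct platform_device *pdev)
{
	/* create the netdev/rdma device for this subdevice here */
	return 0;
}

static struct platform_driver mlx5_subdev_driver = {
	.probe = mlx5_subdev_probe,
	.driver = {
		.name = "mlx5_subdev",
	},
};
module_platform_driver(mlx5_subdev_driver);
```

So to bind driver Y instead of driver X, the parent would have to register a cell with a different name; there is no per-device driver selection beyond that, as far as I can see.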

> You should be able to just use a normal
> PCI device for this just fine, and if not, we should be able to make the
> needed changes to mfd for that.
>
Ok, so a parent PCI device plus mfd devices.
mfd seems to fit this use case.
Do you think the 'Platform devices' section in [1] is stale on the autonomy, host bridge, SoC platform, etc. points?
Should we update the documentation to indicate that platform devices can be non-autonomous, user-created, and can be created on top of a PCI parent device?

[1] https://www.kernel.org/doc/Documentation/driver-model/platform.txt