Re: [PATCH net-next] netdevsim: Register and unregister devlink traps on probe/remove device

From: Leon Romanovsky
Date: Wed Oct 27 2021 - 11:17:34 EST


On Wed, Oct 27, 2021 at 07:17:23AM -0700, Jakub Kicinski wrote:
> On Wed, 27 Oct 2021 08:56:45 +0300 Leon Romanovsky wrote:
> > On Tue, Oct 26, 2021 at 12:56:02PM -0700, Jakub Kicinski wrote:
> > > On Tue, 26 Oct 2021 22:30:23 +0300 Leon Romanovsky wrote:
> > > > No problem, I'll send a revert now, but what is your take on the direction?
> > >
> > > I haven't put in the time to understand the details, so I was hoping
> > > not to pass judgment on the direction. My likely unfounded feeling is
> > > that reshuffling the ordering is not going to fix what is fundamentally
> > > a locking issue. The driver has internal locks it needs to hold both
> > > inside devlink callbacks and when registering devlink objects. We would
> > > solve a lot of the problems if those were one single lock instead of
> > > two. At least that's my recollection from the times I was actually
> > > writing driver code...
> >
> > Exactly, and this is what the reshuffling of registrations does. It
> > allows us to actually reduce the number of locks to a bare minimum, so
> > at least creation and deletion of devlink objects will be lock-free.
>
> That's not what I meant. I meant devlink should call in to take the
> driver's lock, or more likely the driver should use the devlink
> instance mutex instead of creating its own. Most of the devlink
> helpers (with minor exceptions like alloc) should just assert that the
> devlink instance lock is already held by the driver when called.
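
(For concreteness, that assert-based model could look roughly like the
sketch below; devlink_assert_locked() is an illustrative name, not an
existing helper, and the real primitive underneath is
lockdep_assert_held():)

/* Sketch only: object helpers stop taking devlink->lock themselves
 * and instead assert that the calling driver already holds it.
 */
static inline void devlink_assert_locked(struct devlink *devlink)
{
	lockdep_assert_held(&devlink->lock);
}

int devlink_port_register(struct devlink *devlink,
			  struct devlink_port *devlink_port,
			  unsigned int port_index)
{
	devlink_assert_locked(devlink);
	/* ... attach the port to the instance, no extra locking ... */
	return 0;
}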
>
> > The latest changes already solved the devlink reload issues for the
> > mlx5 eth side, and it is deadlock and lockdep free now. We still have
> > deadlocks with our IB part, where we are obligated to hold the pernet
> > lock while registering net notifiers, but that is a different
> > discussion.
> >
> > > > IMHO, the mlxsw layering should be fixed. All this recursive devlink re-entry
> > > > looks horrible and adds unneeded complexity.
> > >
> > > If you're asking about mlxsw or bnxt in particular, I wouldn't say
> > > what they do is wrong until we can point out bugs.
> >
> > I'm talking about mlxsw, and I pointed out the re-entry into devlink
> > over and over.
>
> To me "pointing to re-entry" read like breaking the new model you have
> in mind, not actual bug/race/deadlock etc. If that's not the case the
> explanation flew over my head :)

It doesn't break anything, but it complicates things without any reason.

Let me try to summarize my vision for devlink. It is not written
in stone and changes after every review comment. :)

I want to divide all devlink APIs into two buckets (a rough sketch
follows the list):
1. Before devlink_register() - we don't need to worry about locking at
all. If a caller decides to go crazy and wants to make parallel calls
into devlink at this stage, he/she will need to be responsible for
proper locking.

2. After devlink_register() - users can send their commands through
netlink, so we need maximum protection. The devlink core will have an
RW semaphore to make sure that every entry point in this stage is
marked as either read (allowing parallel calls) or write (exclusive
access). Plus we have a barrier for devlink_unregister().
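
This is roughly what I have in mind (the struct and handler names are
illustrative, not the current devlink code):

/* Bucket #2 entry points take the instance rwsem either for read
 * (parallel netlink readers) or for write (exclusive access).
 */
struct devlink_sketch {
	struct rw_semaphore rwsem;
	/* ... */
};

/* Bucket #2, read side: e.g. a netlink GET handler. */
static int sketch_nl_get(struct devlink_sketch *dl)
{
	down_read(&dl->rwsem);
	/* read devlink state and fill the netlink reply */
	up_read(&dl->rwsem);
	return 0;
}

/* Bucket #2, write side: e.g. reload, needs exclusive access. */
static int sketch_nl_reload(struct devlink_sketch *dl)
{
	down_write(&dl->rwsem);
	/* tear down and re-create devlink objects */
	up_write(&dl->rwsem);
	return 0;
}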

My goal is to move as much as possible into the first bucket, and the
various devlink_*_register() calls are natural candidates for it.
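
A probe flow would then look roughly like this (the foo_* names are
hypothetical, and foo_traps is a const trap array like the one sketched
further below; the devlink calls themselves are the real API):

static int foo_probe(struct device *dev)
{
	struct devlink *devlink;
	int err;

	devlink = devlink_alloc(&foo_devlink_ops,
				sizeof(struct foo_priv), dev);
	if (!devlink)
		return -ENOMEM;

	/* Bucket #1: instance is not visible to userspace yet,
	 * so no locking is needed here.
	 */
	err = devlink_traps_register(devlink, foo_traps,
				     ARRAY_SIZE(foo_traps),
				     devlink_priv(devlink));
	if (err)
		goto err_free;

	/* Last step: open the door to netlink users (bucket #2). */
	devlink_register(devlink);
	return 0;

err_free:
	devlink_free(devlink);
	return err;
}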

In addition, they are candidates because devlink is a SW layer; in my
view, everything or almost everything should be allocated during driver
init with const arrays and feature bits, with a clear separation between
devlink and the driver beneath it.
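
Something in the spirit of the netdevsim trap arrays (the specific
traps chosen here are arbitrary):

static const struct devlink_trap foo_traps[] = {
	DEVLINK_TRAP_GENERIC(DROP, DROP, SMAC_MC,
			     DEVLINK_TRAP_GROUP_GENERIC_ID_L2_DROPS, 0),
	DEVLINK_TRAP_GENERIC(DROP, DROP, VLAN_TAG_MISMATCH,
			     DEVLINK_TRAP_GROUP_GENERIC_ID_L2_DROPS, 0),
};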

Call chains like "devlink->driver->devlink->driver..." that exist in
mlxsw can't be correct from a layering POV.

The current implementation of devlink reload in mlxsw adds an amazingly
large amount of over-engineering to the devlink core, because it drags
many (constant) initializations into bucket #2.

Bottom line: convert the devlink core to be similar to the driver core. :)

Thanks