Re: [PATCH v5 0/2] Synchronize DT overlay removal with devlink removals
From: Greg Kroah-Hartman
Date: Thu Apr 11 2024 - 09:23:32 EST
On Fri, Mar 08, 2024 at 02:29:28PM -0800, Saravana Kannan wrote:
> On Fri, Mar 8, 2024 at 12:05 PM Rob Herring <robh@xxxxxxxxxx> wrote:
> >
> > On Thu, Mar 07, 2024 at 12:09:59PM +0100, Herve Codina wrote:
> > > Hi,
> > >
> > > In the following sequence:
> > > of_platform_depopulate(); /* Remove devices from a DT overlay node */
> > > of_overlay_remove(); /* Remove the DT overlay node itself */
> > >
> > > Some warnings are raised by __of_changeset_entry_destroy() which was
> > > called from of_overlay_remove():
> > > ERROR: memory leak, expected refcount 1 instead of 2 ...
> > >
> > > The issue is that, during the devlink removals triggered by
> > > of_platform_depopulate(), jobs are put on a workqueue.
> > > These jobs drop the reference to the devices. When a device is no
> > > longer referenced (refcount == 0), it is released and the reference
> > > to its of_node is dropped by a call to of_node_put().
> > > These operations are correct in themselves except that, because of
> > > the workqueue, they run asynchronously with respect to the calling
> > > functions.
> > >
> > > In the sequence above, the jobs run too late, i.e. after the call to
> > > __of_changeset_entry_destroy(), and so a missing of_node_put() call
> > > is detected by __of_changeset_entry_destroy().
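> > >
> > > A simplified view of the problematic ordering (illustrative only, not
> > > the exact call chain):
> > >
> > >   of_platform_depopulate()
> > >     -> devices unregistered, devlink removal jobs queued on a workqueue
> > >   of_overlay_remove()
> > >     -> __of_changeset_entry_destroy() sees refcount 2 and warns
> > >   ... later ...
> > >   workqueue jobs run
> > >     -> put_device() -> of_node_put()   /* too late */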
> > >
> > > This series fixes this issue by introducing device_link_wait_removal()
> > > to wait for the end of job execution (patch 1) and by using this
> > > function to synchronize the overlay removal with the end of job
> > > execution (patch 2).
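> > >
> > > In rough outline (a simplified sketch of the approach described
> > > above; the actual patches may differ in the details):
> > >
> > >   /* Patch 1, drivers/base/core.c: all devlink removal jobs go
> > >    * through the dedicated 'device_link_wq' workqueue, so flushing it
> > >    * guarantees that the pending of_node_put() calls have run.
> > >    */
> > >   void device_link_wait_removal(void)
> > >   {
> > >           flush_workqueue(device_link_wq);
> > >   }
> > >
> > >   /* Patch 2, drivers/of/dynamic.c (needs linux/device.h): wait for
> > >    * the asynchronous devlink removals before destroying the changeset
> > >    * entries and checking the of_node refcounts.
> > >    */
> > >   void of_changeset_destroy(struct of_changeset *ocs)
> > >   {
> > >           struct of_changeset_entry *ce, *cen;
> > >
> > >           device_link_wait_removal();
> > >
> > >           list_for_each_entry_safe_reverse(ce, cen, &ocs->entries, node)
> > >                   __of_changeset_entry_destroy(ce);
> > >   }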
> > >
> > > Compared to the previous iteration:
> > > https://lore.kernel.org/linux-kernel/20240306085007.169771-1-herve.codina@xxxxxxxxxxx/
> > > this v5 series:
> > > - Remove a 'Fixes' tag
> > > - Update a comment
> > > - Add 'Tested-by' and 'Reviewed-by' tags
> > >
> > > This series handles cases reported by Luca [1] and Nuno [2].
> > > [1]: https://lore.kernel.org/all/20231220181627.341e8789@booty/
> > > [2]: https://lore.kernel.org/all/20240205-fix-device-links-overlays-v2-2-5344f8c79d57@xxxxxxxxxx/
> > >
> > > Best regards,
> > > Hervé
> > >
> > > Changes v4 -> v5
> > > - Patch 1
> > > Remove the 'Fixes' tag
> > > Add 'Tested-by: Luca Ceresoli <luca.ceresoli@xxxxxxxxxxx>'
> > > Add 'Reviewed-by: Nuno Sa <nuno.sa@xxxxxxxxxx>'
> > >
> > > - Patch 2
> > > Update comment as suggested
> > > Add 'Reviewed-by: Saravana Kannan <saravanak@xxxxxxxxxx>'
> > > Add 'Tested-by: Luca Ceresoli <luca.ceresoli@xxxxxxxxxxx>'
> > > Add 'Reviewed-by: Nuno Sa <nuno.sa@xxxxxxxxxx>'
> > >
> > > Changes v3 -> v4
> > > - Patch 1
> > > Use flush_workqueue() instead of drain_workqueue().
> > >
> > > - Patch 2
> > > Remove unlock/re-lock when calling device_link_wait_removal()
> > > Move device_link_wait_removal() call to of_changeset_destroy()
> > > Update commit log
> > >
> > > Changes v2 -> v3
> > > - Patch 1
> > > No changes
> > >
> > > - Patch 2
> > > Add missing device.h
> > >
> > > Changes v1 -> v2
> > > - Patch 1
> > > Rename the workqueue to 'device_link_wq'
> > > Add 'Fixes' tag and Cc stable
> > >
> > > - Patch 2
> > > Add device.h inclusion.
> > > Call device_link_wait_removal() later in the overlay removal
> > > sequence (i.e. in free_overlay_changeset() function).
> > > Drop of_mutex lock while calling device_link_wait_removal().
> > > Add 'Fixes' tag and Cc stable
> > >
> > > Herve Codina (2):
> > > driver core: Introduce device_link_wait_removal()
> > > of: dynamic: Synchronize of_changeset_destroy() with the devlink
> > > removals
> > >
> > > drivers/base/core.c | 26 +++++++++++++++++++++++---
> > > drivers/of/dynamic.c | 12 ++++++++++++
> > > include/linux/device.h | 1 +
> > > 3 files changed, 36 insertions(+), 3 deletions(-)
> >
> > This looks good to me. I can take this given the user is DT. Looking for
> > a R-by from Saravana and Ack from Greg. A R-by from Rafael would be
> > great too.
>
> Reviewed-by: Saravana Kannan <saravanak@xxxxxxxxxx>
Reviewed-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>