Re: INFO: possible circular locking dependency at cleanup_workqueue_thread

From: Alan Stern
Date: Sun May 24 2009 - 10:30:37 EST

On Sun, 24 May 2009, Rafael J. Wysocki wrote:

> The patch is appended for reference (Alan, please have a look; I can't recall
> why exactly we have called device_pm_lock() from the core suspend/hibernation
> code instead of acquiring the lock locally in drivers/base/power/main.c) and
> I'll attach it to the bug entry too.

I can't remember the reason either. Probably there wasn't any. The
patch looks fine, and it has the nice added benefit that now the only
user of device_pm_lock() will be device_move().

> ---
> From: Rafael J. Wysocki <rjw@xxxxxxx>
> Subject: PM: Do not hold dpm_list_mtx while disabling/enabling nonboot CPUs
>
> We shouldn't hold dpm_list_mtx while executing
> [disable|enable]_nonboot_cpus(), because theoretically this may lead
> to a deadlock as shown by the following example (provided by Johannes
> Berg):
> CPU 3          CPU 2                     CPU 1
>                                          suspend/hibernate
>                something:
>                rtnl_lock()               device_pm_lock()
>                                           -> mutex_lock(&dpm_list_mtx)
>
>                mutex_lock(&dpm_list_mtx)
>
> linkwatch_work
>  -> rtnl_lock()
>                                          disable_nonboot_cpus()
>                                           -> flush CPU 3 workqueue
>
> Fortunately, device drivers are supposed to stop any activities that
> might lead to the registration of new device objects and/or to the
> removal of the existing ones way before disable_nonboot_cpus() is

Strictly speaking, drivers are still allowed to unregister existing
devices. They are forbidden only to register new ones. This shouldn't
hurt anything, though.

> called, so it shouldn't be necessary to hold dpm_list_mtx over the
> entire late part of device suspend and early part of device resume.
> Thus, during the late suspend and the early resume of devices acquire
> dpm_list_mtx only when dpm_list is going to be traversed and release
> it right after that.

Acked-by: Alan Stern <stern@xxxxxxxxxxxxxxxxxxx>
