Re: CPUFREQ-CPUHOTPLUG: Possible circular locking dependency
From: Gautham R Shenoy
Date: Wed Nov 29 2006 - 23:29:01 EST
On Wed, Nov 29, 2006 at 01:05:56PM -0800, Andrew Morton wrote:
> On Wed, 29 Nov 2006 20:54:04 +0530
> Gautham R Shenoy <ego@xxxxxxxxxx> wrote:
>
> > Ok, so to cut the long story short,
> > - While changing governor from anything to
> > ondemand, locks are taken in the following order
> >
> > policy->lock ===> dbs_mutex ===> workqueue_mutex.
> >
> > - While offlining a cpu, locks are taken in the following order
> >
> > cpu_add_remove_lock ==> sched_hotcpu_mutex ==> workqueue_mutex ==>
> > cache_chain_mutex ==> policy->lock.
>
> What functions are taking all these locks? (ie: the callpath?)
While changing the cpufreq governor to ondemand, the locks are taken in
the following order:
--------------------------------------------------------------------------
lock              function                file
--------------------------------------------------------------------------
policy->lock      store_scaling_governor  drivers/cpufreq/cpufreq.c
dbs_mutex         cpufreq_governor_dbs    drivers/cpufreq/cpufreq_ondemand.c
workqueue_mutex   __create_workqueue      kernel/workqueue.c
--------------------------------------------------------------------------
The complete callpath would be:
store_scaling_governor [*]
|
__cpufreq_set_policy
|
__cpufreq_governor(data, CPUFREQ_GOV_START)
|
policy->governor->governor => cpufreq_governor_dbs(data, CPUFREQ_GOV_START) [*]
|
create_workqueue (#defined as __create_workqueue) [*]
where [*] = locks taken.
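To make the nesting concrete, here is a minimal userspace model of that
path, with pthread mutexes standing in for the three kernel locks (this
is an illustration of the ordering only, not kernel code):

#include <pthread.h>

/* Stand-ins for the three locks on the governor-start path. */
static pthread_mutex_t policy_lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t dbs_mutex       = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t workqueue_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Models store_scaling_governor -> cpufreq_governor_dbs ->
 * __create_workqueue, taking the locks in the order shown above. */
static void governor_start_path(void)
{
	pthread_mutex_lock(&policy_lock);       /* store_scaling_governor */
	pthread_mutex_lock(&dbs_mutex);         /* cpufreq_governor_dbs   */
	pthread_mutex_lock(&workqueue_mutex);   /* __create_workqueue     */

	/* ... the kondemand workqueue gets created here ... */

	pthread_mutex_unlock(&workqueue_mutex);
	pthread_mutex_unlock(&dbs_mutex);
	pthread_mutex_unlock(&policy_lock);
}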
While offlining a CPU, locks are taken in the following order:
--------------------------------------------------------------------------
lock                 function                file
--------------------------------------------------------------------------
cpu_add_remove_lock  cpu_down                kernel/cpu.c
sched_hotcpu_mutex   migration_call          kernel/sched.c
workqueue_mutex      workqueue_cpu_callback  kernel/workqueue.c
cache_chain_mutex    cpuup_callback          mm/slab.c
policy->lock         cpufreq_driver_target   drivers/cpufreq/cpufreq.c
--------------------------------------------------------------------------
Please note that in the above,
- sched_hotcpu_mutex, workqueue_mutex, and cache_chain_mutex are taken
while handling CPU_LOCK_ACQUIRE events in the respective subsystems'
cpu_callback functions.
- policy->lock is taken while handling CPU_DOWN_PREPARE in
cpufreq_cpu_callback, which calls cpufreq_driver_target.
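Putting the two paths side by side shows the inversion lockdep is
complaining about. Below is a self-contained model of just the two locks
that form the cycle (pthread stand-ins again, not kernel code; the
intermediate hotplug locks are omitted since they are not part of the
cycle). If the two threads interleave at the wrong moment, it deadlocks:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t policy_lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t workqueue_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Governor change: policy->lock before workqueue_mutex
 * (dbs_mutex sits between them but is not part of the cycle). */
static void *change_governor(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&policy_lock);       /* store_scaling_governor */
	pthread_mutex_lock(&workqueue_mutex);   /* __create_workqueue     */
	pthread_mutex_unlock(&workqueue_mutex);
	pthread_mutex_unlock(&policy_lock);
	return NULL;
}

/* CPU offline: workqueue_mutex before policy->lock -- the opposite
 * order. */
static void *offline_cpu(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&workqueue_mutex);   /* workqueue_cpu_callback */
	pthread_mutex_lock(&policy_lock);       /* cpufreq_driver_target  */
	pthread_mutex_unlock(&policy_lock);
	pthread_mutex_unlock(&workqueue_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, change_governor, NULL);
	pthread_create(&b, NULL, offline_cpu, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* Only reached when the interleaving happens to be benign. */
	printf("no deadlock this run, but the ordering cycle is real\n");
	return 0;
}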
It's perfectly clear that in the cpu-offline callpath, cpufreq does not
need to do anything with the workqueue.
So can we ignore this circular-dep warning as a false positive?
Or is there a way to exploit this circular dependency?
At the moment, I cannot think of a way to exploit this circular dependency,
unless we do something like trying to destroy the created workqueue when the
cpu is dead, i.e. make the cpufreq governors cpu-hotplug-aware.
(eeks! that doesn't look good)
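For illustration, here is why that would hurt, as a third thread function
for the model above (the function name is made up, and this is a sketch,
not proposed code): the offline path takes workqueue_mutex at
CPU_LOCK_ACQUIRE and presumably still holds it when CPU_DEAD is handled,
so a governor that tried to destroy its workqueue from there would block
on a lock its own callpath already holds.

/* Hypothetical hotplug-aware governor teardown -- NOT proposed code. */
static void *offline_cpu_hotplug_aware(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&workqueue_mutex);   /* CPU_LOCK_ACQUIRE       */
	pthread_mutex_lock(&policy_lock);       /* cpufreq's hotplug path */

	/* Stand-in for destroy_workqueue(), which needs workqueue_mutex: */
	if (pthread_mutex_trylock(&workqueue_mutex) != 0)
		printf("real deadlock: workqueue_mutex already held\n");
	else
		pthread_mutex_unlock(&workqueue_mutex);

	pthread_mutex_unlock(&policy_lock);
	pthread_mutex_unlock(&workqueue_mutex);
	return NULL;
}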
I'm working on fixing this. Let me see if I can come up with something.
Thanks and Regards
gautham.
--
Gautham R Shenoy
Linux Technology Center
IBM India.
"Freedom comes with a price tag of responsibility, which is still a bargain,
because Freedom is priceless!"