> Please always mention the version number and history. Not everybody
> remembers what changed since the last version.

You're right. I was rushing and forgot.

> On 3 April 2013 20:33, Nathan Zimmer <nzimmer@xxxxxxx> wrote:
>> We eventually would like to remove the rwlock cpufreq_driver_lock, or convert
>> it back to a spinlock and protect the read sections with RCU. The first step in
>
> Why do we want to convert it back to a spinlock?

Documentation/spinlocks.txt:84 (on reader-writer locks versus plain spinlocks).
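To make the scheme concrete: readers would use rcu_read_lock()/rcu_dereference()
(as in the hunks below), while writers keep a plain spinlock only to serialize
against each other and publish the pointer with RCU. A minimal sketch of the
update side; this is illustrative only, not the actual patch, and the error
checking and driver-pointer sanity checks are omitted:

	static DEFINE_SPINLOCK(cpufreq_driver_lock);
	static struct cpufreq_driver __rcu *cpufreq_driver;

	int cpufreq_unregister_driver(struct cpufreq_driver *driver)
	{
		spin_lock(&cpufreq_driver_lock);
		rcu_assign_pointer(cpufreq_driver, NULL);
		spin_unlock(&cpufreq_driver_lock);

		/*
		 * Wait until every reader that might still hold the old
		 * pointer has left its critical section; only then is it
		 * safe to let the driver go away.
		 */
		synchronize_rcu();
		return 0;
	}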
Will do.

>> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
>>  bool have_governor_per_policy(void)
>>  {
>> -	return cpufreq_driver->have_governor_per_policy;
>> +	bool have_governor;
>> +	rcu_read_lock();
>> +	have_governor = rcu_dereference(cpufreq_driver)->have_governor_per_policy;
>> +	rcu_read_unlock();
>> +	return have_governor;
>>  }
> Name the local variable have_governor_per_policy; it looks wrong otherwise.

Ok, I'll fix this spot.

>>  static ssize_t show_scaling_driver(struct cpufreq_policy *policy, char *buf)
>>  {
>> -	return scnprintf(buf, CPUFREQ_NAME_PLEN, "%s\n", cpufreq_driver->name);
>> +	char *name;
>> +	rcu_read_lock();
>> +	name = rcu_dereference(cpufreq_driver)->name;
>> +	rcu_read_unlock();
>> +	return scnprintf(buf, CPUFREQ_NAME_PLEN, "%s\n", name);
>>  }
> This is the definition of struct cpufreq_driver:
>
>	struct cpufreq_driver {
>		struct module *owner;
>		char name[CPUFREQ_NAME_LEN];
>		...
>	};
> The purpose of rcu_read_lock()/rcu_read_unlock() is to delimit the RCU
> read-side critical section, after which the RCU layer is free to free the
> memory allocated to the earlier instance of cpufreq_driver.
>
> So, after the unlock() call you _should_not_ use the memory allocated to
> the cpufreq_driver instance. And here, you are using the memory allocated
> to name[] after the unlock() call.
>
> That looks wrong... I have left the other parts of the driver up to you to
> fix for this "rule of thumb".

In places like show_bios_limit and cpufreq_add_dev_interface we know that the
memory will still be valid.
> Sorry for not pointing this out earlier, but RCU is as new to me as it is
> to you.

Making a learner's mistake isn't really discouraging to me, even when I do it
twice.

> I know you must be frustrated with so many versions of this patch, and
> every time we hand you a new problem... Don't get disheartened by it..
> Keep the good work going :)
> --
> viresh