Re: [PATCH v2] cpufreq: Don't destroy/realloc policy/sysfs on hotplug/suspend
Date: Fri Jul 11 2014 - 06:03:06 EST
Srivatsa S. Bhat wrote:
> On 07/11/2014 09:48 AM, Saravana Kannan wrote:
>> The CPUfreq driver moves the cpufreq policy ownership between CPUs when
>> CPUs within a cluster (CPUs sharing same policy) go ONLINE/OFFLINE. When
>> moving policy ownership between CPUs, it also moves the cpufreq sysfs
>> directory between CPUs and also fixes up the symlinks of the other CPUs
>> in the cluster.
>> Also, when all the CPUs in a cluster go OFFLINE, all the sysfs nodes and
>> directories are deleted, the kobject is released and the policy is freed.
>> And when the first CPU in a cluster comes up, the policy is reallocated and
>> initialized, the kobject is acquired, and the sysfs nodes are created or
>> symlinked.
>> All these steps end up creating unnecessarily complicated code and locking.
>> There's no real benefit to adding/removing/moving the sysfs nodes and
>> policy between CPUs. Other per-CPU sysfs directories, like power and
>> cpuidle, are left alone during hotplug. So there's some precedent for what
>> this patch is trying to do.
>> This patch simplifies a lot of the code and locking by removing the
>> adding/removing/moving of policy/sysfs/kobj and just leaves the cpufreq
>> directory and policy in place irrespective of whether the CPUs are ONLINE
>> or OFFLINE.
>> Leaving the policy, sysfs and kobject in place also brings these benefits:
>> * Faster suspend/resume.
>> * Faster hotplug.
>> * Sysfs file permissions maintained across hotplug without userspace
>> workarounds.
>> * Policy settings and governor tunables maintained across suspend/resume
>> and hotplug.
>> * Cpufreq stats would be maintained across hotplug for all CPUs and can
>> be queried even after a CPU goes OFFLINE.
>> Change-Id: I39c395e1fee8731880c0fd7c8a9c1d83e2e4b8d0
>> Tested-by: Stephen Boyd <sboyd@xxxxxxxxxxxxxx>
>> Signed-off-by: Saravana Kannan <skannan@xxxxxxxxxxxxxx>
>> Preliminary testing has been done. cpufreq directories are getting created
>> properly. Online/offline of CPUs work. Policies remain unmodifiable from
>> userspace when all policy CPUs are offline.
>> Error handling code has NOT been updated.
>> I've added a bunch of FIXME comments next to where I'm not sure about the
>> locking in the existing code. I believe most of the try_lock's were needed
>> to prevent a deadlock between the sysfs lock and the cpufreq locks. Now
>> that the sysfs entries are not touched after creating them, we should be
>> able to replace most/all of these try_lock's with a normal lock.
>> This patch has more room for code simplification, but I would like to get
>> some acks for the functionality and this code before I do further
>> simplification.
> The idea behind this work is very welcome indeed! IMHO, there is nothing
> conceptually wrong in maintaining the per-cpu sysfs files across CPU hotplug
> (as long as we take care to return appropriate error codes if userspace
> tries to set values using the control files of offline CPUs). So, it
> boils down to whether or not we get the implementation right; the idea
> looks fine as of now. Hence, your efforts in making this patch(set) easier
> to review will certainly help. Perhaps you can simplify the code later, but
> at this point, splitting up this patch into multiple smaller, reviewable
> pieces (accompanied by well-written changelogs that explain the intent) is
> the top priority. Just like Viresh, even I had a hard time reviewing all of
> this in one go.
> Thank you for taking up this work!
Thanks for the support. I'll keep in mind to keep the patches simple and
not do unnecessary optimizations. But the first patch diff unfortunately
is going to be a bit big since it'll delete a lot of code. :( But I'll add
more detailed commit text or "cover" text in the next one. I don't want to
split up the patch so much that individual ones don't compile or boot.
Maybe after patch v3, if you guys can suggest splitting it up into chunks
that won't involve huge rewrites, I can try to do that.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation