[RFC PATCH V3 0/3] PM/CPU: Parallel enabling of nonboot CPUs with resuming devices
From: Lan Tianyu
Date: Thu Sep 25 2014 - 04:36:24 EST
This patchset parallelizes enabling the nonboot CPUs with resuming devices
during system resume in order to accelerate S2RAM. In a test on an
8-logical-core Haswell machine, system resume time drops from 347ms to
217ms with this patchset.
Currently, all nonboot CPUs are enabled serially during system resume:
the boot CPU onlines the nonboot CPUs one by one and only then resumes
devices. Between being brought up and device resume, the nonboot CPUs
have few tasks assigned to them, so CPU time is wasted. This patchset
lets the boot CPU proceed to resuming devices after onlining a single
nonboot CPU and starting a thread; that thread, scheduled on the first
online CPU, takes charge of bringing up the remaining frozen CPUs. This
makes enabling CPU2..x run in parallel with resuming devices.
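In rough code, the idea looks like the minimal sketch below against
kernel/cpu.c. It reuses the existing frozen_cpus mask and _cpu_up()
helper; the function name async_enable_nonboot_cpus and the thread name
are made up here for illustration, and error handling plus the
cpu_hotplug_disabled bookkeeping of the real function are omitted:

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/printk.h>

/* Sketch only: online the remaining frozen CPUs from a kthread. */
static int async_enable_nonboot_cpus(void *data)
{
	int cpu;

	cpu_maps_update_begin();	/* takes cpu_add_remove_lock */
	for_each_cpu(cpu, frozen_cpus) {
		if (_cpu_up(cpu, 1))	/* tasks_frozen == 1 */
			pr_err("Error bringing cpu%d up on resume\n", cpu);
	}
	cpumask_clear(frozen_cpus);
	cpu_maps_update_done();
	return 0;
}

void enable_nonboot_cpus(void)
{
	int first = cpumask_first(frozen_cpus);
	struct task_struct *tsk;

	/* Online one nonboot CPU synchronously... */
	cpu_maps_update_begin();
	_cpu_up(first, 1);
	cpumask_clear_cpu(first, frozen_cpus);
	cpu_maps_update_done();

	/*
	 * ...then hand the remaining frozen CPUs to a kthread bound
	 * to that CPU and return, so device resume runs in parallel.
	 */
	tsk = kthread_create(async_enable_nonboot_cpus, NULL,
			     "enable_nonboot_cpus");
	if (!IS_ERR(tsk)) {
		kthread_bind(tsk, first);
		wake_up_process(tsk);
	}
}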
Patch 2 changes the MTRR/PAT initialization policy for nonboot CPUs.
The original code initializes MTRR/PAT on the nonboot CPUs only after
they have all come online during system resume. Now that enabling the
nonboot CPUs runs in parallel with resuming devices, a nonboot CPU may
be assigned tasks before all CPUs are online, so MTRR/PAT has to be
initialized on each nonboot CPU as soon as it comes online, just as for
a dynamic single-CPU online.
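For context, the per-CPU hook that dynamic onlining already uses sits in
arch/x86/kernel/cpu/mtrr/main.c; slightly simplified, it looks like the
code below. The mtrr_aps_delayed_init flag is what makes it a no-op
during resume today, and patch 2's change amounts to no longer
deferring, so this hook programs each AP as it comes online:

/* Simplified from arch/x86/kernel/cpu/mtrr/main.c. */
void mtrr_ap_init(void)
{
	if (!use_intel() || mtrr_aps_delayed_init)
		return;	/* the resume path used to bail out here */

	/*
	 * Program this AP's MTRRs (and with them PAT consistency)
	 * from the saved state. Run as each nonboot CPU comes online,
	 * this gives the CPU valid MTRR/PAT state before it can be
	 * handed any work.
	 */
	set_mtrr_from_inactive_cpu(~0U, 0, 0, 0);
}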
Patch 3 guarantees that all CPUs are online before the cpufreq_suspended
flag is cleared in cpufreq_resume(), to avoid breaking the cpufreq
subsystem.
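A minimal sketch of that change, assuming the lock is taken through its
existing accessors cpu_maps_update_begin()/cpu_maps_update_done(): since
the bring-up thread holds cpu_add_remove_lock while it onlines the
remaining CPUs, cpufreq_resume() blocks here until they are all up:

#include <linux/cpu.h>

void cpufreq_resume(void)
{
	if (!cpufreq_driver)
		return;

	/*
	 * cpu_maps_update_begin() takes cpu_add_remove_lock, which
	 * the CPU bring-up thread holds while onlining the remaining
	 * nonboot CPUs, so the flag is only cleared once every CPU
	 * is online.
	 */
	cpu_maps_update_begin();
	cpufreq_suspended = false;
	cpu_maps_update_done();

	/* ...the rest of cpufreq_resume() is unchanged... */
}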
Lan Tianyu (3):
PM/CPU: Parallel enabling of nonboot CPUs with resuming devices
X86/CPU: Initialize MTRR/PAT when each CPU comes online during system
resume
Cpufreq: Hold cpu_add_remove_lock before changing cpufreq_suspended flag
drivers/cpufreq/cpufreq.c | 2 ++
kernel/cpu.c | 64 +++++++++++++++++++++++++++++++++++++++--------
2 files changed, 55 insertions(+), 11 deletions(-)
--
1.8.4.rc0.1.g8f6a3e5.dirty