swsusp: add locking to software_resume

From: Pavel Machek
Date: Tue Aug 02 2005 - 06:28:59 EST


From: Shaohua Li <shaohua.li@xxxxxxxxx>

This protects swsusp_resume_device and software_resume() against two
users banging on them from userspace at the same time, by taking pm_sem
around the code that reads and updates swsusp_resume_device in
software_resume() and resume_store().

Signed-off-by: Shaohua Li <shaohua.li@xxxxxxxxx>
Signed-off-by: Pavel Machek <pavel@xxxxxxx>

---
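[ Not part of the patch: for illustration, a minimal userspace analogue
of the race described above, with a pthread mutex standing in for
pm_sem. All names below (pm_lock, resume_device, set_resume_device,
software_resume_sketch) are invented for the sketch; build with
"cc -pthread". ]

#include <pthread.h>
#include <stdio.h>

/* Userspace stand-ins: pm_lock plays the role of pm_sem, and
 * resume_device plays the role of swsusp_resume_device. */
static pthread_mutex_t pm_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long resume_device;

/* resume_store() analogue: userspace sets the resume device. */
static void *set_resume_device(void *arg)
{
        pthread_mutex_lock(&pm_lock);
        resume_device = (unsigned long)arg;
        pthread_mutex_unlock(&pm_lock);
        return NULL;
}

/* software_resume() analogue: fall back to a default only if nothing
 * is set yet.  Without the lock, this check-then-set can interleave
 * with set_resume_device() and one of the updates gets lost. */
static void *software_resume_sketch(void *unused)
{
        (void)unused;
        pthread_mutex_lock(&pm_lock);
        if (!resume_device)
                resume_device = 0x0801;  /* pretend name_to_dev_t() result */
        printf("resuming from %#lx\n", resume_device);
        pthread_mutex_unlock(&pm_lock);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, set_resume_device,
                       (void *)(unsigned long)0x0802);
        pthread_create(&b, NULL, software_resume_sketch, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}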
commit c1d6e115ea6f797563fe6873de25892cb16a309e
tree 53b5eead95da859baf820be52a62b2ddab007c29
parent 86fa9d8a44c633603139b427c160ed1cdd41c6ce
author <pavel@Elf.(none)> Tue, 02 Aug 2005 13:19:38 +0200
committer <pavel@Elf.(none)> Tue, 02 Aug 2005 13:19:38 +0200

kernel/power/disk.c | 10 +++++++++-
1 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/kernel/power/disk.c b/kernel/power/disk.c
--- a/kernel/power/disk.c
+++ b/kernel/power/disk.c
@@ -233,9 +233,12 @@ static int software_resume(void)
{
int error;

+ down(&pm_sem);
if (!swsusp_resume_device) {
- if (!strlen(resume_file))
+ if (!strlen(resume_file)) {
+ up(&pm_sem);
return -ENOENT;
+ }
swsusp_resume_device = name_to_dev_t(resume_file);
pr_debug("swsusp: Resume From Partition %s\n", resume_file);
} else {
@@ -248,6 +251,7 @@ static int software_resume(void)
* FIXME: If noresume is specified, we need to find the partition
* and reset it back to normal swap space.
*/
+ up(&pm_sem);
return 0;
}

@@ -284,6 +288,8 @@ static int software_resume(void)
Cleanup:
unprepare_processes();
Done:
+ /* For the success case, the suspend path will release the lock */
+ up(&pm_sem);
pr_debug("PM: Resume from disk failed.\n");
return 0;
}
@@ -390,7 +396,9 @@ static ssize_t resume_store(struct subsy
if (sscanf(buf, "%u:%u", &maj, &min) == 2) {
res = MKDEV(maj,min);
if (maj == MAJOR(res) && min == MINOR(res)) {
+ down(&pm_sem);
swsusp_resume_device = res;
+ up(&pm_sem);
printk("Attempting manual resume\n");
noresume = 0;
software_resume();
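
[ A note on the comment added at the Done: label above: per that
comment, on success the lock is not dropped here but handed off to the
suspend path, which releases it later; the up(&pm_sem) at Done: only
covers the failure path. Below is a minimal userspace sketch of that
"unlock on every failure, hand the lock off on success" shape; all
names are invented and this is not kernel code. ]

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t pm_lock = PTHREAD_MUTEX_INITIALIZER;

/* Invented stand-in for "an image is available to resume from". */
static int image_present;

/* Stands in for the path that, per the comment in the patch, is
 * responsible for releasing the lock when the resume succeeds. */
static void resume_path_releases_lock(void)
{
        printf("restoring image, lock released by this path\n");
        pthread_mutex_unlock(&pm_lock);
}

static int software_resume_sketch(void)
{
        pthread_mutex_lock(&pm_lock);

        if (!image_present) {
                /* Failure path: release the lock here, as the patch
                 * does with up(&pm_sem) before returning. */
                pthread_mutex_unlock(&pm_lock);
                return -1;
        }

        /* Success path: ownership of the lock is handed off; the
         * callee releases it. */
        resume_path_releases_lock();
        return 0;
}

int main(void)
{
        printf("no image:   %d\n", software_resume_sketch());
        image_present = 1;
        printf("with image: %d\n", software_resume_sketch());
        return 0;
}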

--
teflon -- maybe it is a trademark, but it should not be.