[PATCH V2 08/15] workqueue: only change worker->pool with pool lock held
From: Lai Jiangshan
Date: Mon Feb 18 2013 - 11:15:32 EST
We ensure these semantics:
worker->pool is set to pool when the worker is associated to the pool.
(A normal worker is associated to its pool when created;
a rescuer is associated to a pool dynamically.)
worker->pool is set to NULL when the worker is de-associated from the pool.
Both assignments are done with pool->lock held.
Thus we have this semantic:
If pool->lock is held and worker->pool == pool, we can determine that
the worker is currently associated to the pool.
Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxx>
---
kernel/workqueue.c | 3 ++-
kernel/workqueue_internal.h | 1 +
2 files changed, 3 insertions(+), 1 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b987195..9086a33 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2412,8 +2412,8 @@ repeat:
mayday_clear_cpu(cpu, wq->mayday_mask);
/* migrate to the target cpu if possible */
- rescuer->pool = pool;
worker_maybe_bind_and_lock(pool);
+ rescuer->pool = pool;
/*
* Slurp in all works issued via this workqueue and
@@ -2434,6 +2434,7 @@ repeat:
if (keep_working(pool))
wake_up_worker(pool);
+ rescuer->pool = NULL;
spin_unlock_irq(&pool->lock);
}
diff --git a/kernel/workqueue_internal.h b/kernel/workqueue_internal.h
index 3694bc1..1040abc 100644
--- a/kernel/workqueue_internal.h
+++ b/kernel/workqueue_internal.h
@@ -32,6 +32,7 @@ struct worker {
struct list_head scheduled; /* L: scheduled works */
struct task_struct *task; /* I: worker task */
struct worker_pool *pool; /* I: the associated pool */
+ /* L: for rescuers */
/* 64 bytes boundary on 64bit, 32 on 32bit */
unsigned long last_active; /* L: last active timestamp */
unsigned int flags; /* X: flags */
--
1.7.7.6