[PATCH 3/4] workqueue: reuse the current default pwq when its attrs unchanged

From: Lai Jiangshan
Date: Wed Jun 03 2015 - 11:02:08 EST


When apply_wqattrs_prepare() is called, it is possible that the default
pwq is unaffected. This is always the case when only the NUMA affinity is
being changed, and sometimes the case when the low-level cpumask is being
changed.

So we try to reuse the current default pwq when its attrs are unchanged.
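
For reference, "attrs unchanged" here means wqattrs_equal() returns true.
My understanding is that the helper only compares the pool-level
attributes, roughly like the sketch below (the exact body in the tree may
differ); this is why a pure NUMA-affinity change always leaves the default
pwq's attrs equal:

	/* sketch of wqattrs_equal(): only ->nice and ->cpumask matter
	 * for matching the default pwq's pool attributes */
	static bool wqattrs_equal(const struct workqueue_attrs *a,
				  const struct workqueue_attrs *b)
	{
		if (a->nice != b->nice)
			return false;
		if (!cpumask_equal(a->cpumask, b->cpumask))
			return false;
		return true;
	}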

After this change, a bare "ctx->dfl_pwq->refcnt++" could be dangerous:
when ctx->dfl_pwq is a reused pwq, it may still be receiving or processing
work items, so the unlocked increment would race with concurrent
[get|put]_pwq(). So we use get_pwq_unlocked() instead.
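
get_pwq_unlocked() is assumed to take the pool lock around the refcount
bump, presumably mirroring the existing put_pwq_unlocked(); a minimal
sketch under that assumption:

	/* assumed shape of get_pwq_unlocked(): grab pool->lock so the
	 * refcount update serializes with concurrent get_pwq()/put_pwq()
	 * running on the same pool */
	static void get_pwq_unlocked(struct pool_workqueue *pwq)
	{
		spin_lock_irq(&pwq->pool->lock);
		get_pwq(pwq);
		spin_unlock_irq(&pwq->pool->lock);
	}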

Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxx>
---
kernel/workqueue.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 197520b..0c2f819 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3549,11 +3549,17 @@ apply_wqattrs_prepare(struct workqueue_struct *wq,
 	/*
 	 * If something goes wrong during CPU up/down, we'll fall back to
 	 * the default pwq covering whole @attrs->cpumask. Always create
-	 * it even if we don't use it immediately.
+	 * it even if we don't use it immediately. Check and reuse the
+	 * current default pwq if the @new_attrs equals the current one.
 	 */
-	ctx->dfl_pwq = alloc_unbound_pwq(wq, new_attrs);
-	if (!ctx->dfl_pwq)
-		goto out_free;
+	if (wq->dfl_pwq && wqattrs_equal(new_attrs, wq->dfl_pwq->pool->attrs)) {
+		get_pwq_unlocked(wq->dfl_pwq);
+		ctx->dfl_pwq = wq->dfl_pwq;
+	} else {
+		ctx->dfl_pwq = alloc_unbound_pwq(wq, new_attrs);
+		if (!ctx->dfl_pwq)
+			goto out_free;
+	}
 
 	for_each_node(node) {
 		if (wq_calc_node_cpumask(new_attrs, node, -1, tmp_attrs->cpumask)) {
@@ -3568,7 +3574,7 @@ apply_wqattrs_prepare(struct workqueue_struct *wq,
 			}
 			ctx->pwq_tbl[node] = pwq;
 		} else {
-			ctx->dfl_pwq->refcnt++;
+			get_pwq_unlocked(ctx->dfl_pwq);
 			ctx->pwq_tbl[node] = ctx->dfl_pwq;
 		}
 	}
--
2.1.0
