workqueue: remove useless smp_mb() from insert_work()
From: Canjiang Lu
Date: Fri Feb 26 2021 - 14:08:58 EST
When a worker is going to sleep, the check of whether a new idle worker
should be kicked is performed under pool->lock. Since insert_work() is
also protected by pool->lock, the two paths are serialized. The original
lock-less design therefore no longer makes sense, and the smp_mb() call
can be removed from insert_work(). The related comments are removed as
well.
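
For readers less familiar with the pairing being removed, below is a
minimal userspace sketch (C11 atomics, not kernel code; worklist_len and
nr_running are illustrative stand-ins for pool->worklist and
pool->nr_running) of the store/load ordering the old smp_mb() provided.
Each side stores to one variable and then loads the other; without a
full barrier on both sides, both loads can observe stale values and the
wakeup is lost. Once both critical sections run under the same
pool->lock, the lock's acquire/release ordering serializes them and the
explicit barrier is redundant.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int worklist_len;	/* stand-in for pool->worklist */
static atomic_int nr_running;	/* stand-in for pool->nr_running */

/* insert_work() side: publish work, then check for a running worker. */
static void *producer(void *arg)
{
	(void)arg;
	atomic_store_explicit(&worklist_len, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* the old smp_mb() */
	if (atomic_load_explicit(&nr_running, memory_order_relaxed) == 0)
		puts("producer: no one running, wake an idle worker");
	return NULL;
}

/* wq_worker_sleeping() side: drop nr_running, then check the worklist. */
static void *sleeper(void *arg)
{
	(void)arg;
	/* the atomic RMW plays the "implied mb" of dec_and_test */
	if (atomic_fetch_sub(&nr_running, 1) == 1 &&
	    atomic_load_explicit(&worklist_len, memory_order_relaxed) != 0)
		puts("sleeper: work pending, wake the next idle worker");
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	atomic_init(&nr_running, 1);
	atomic_init(&worklist_len, 0);
	pthread_create(&a, NULL, producer, NULL);
	pthread_create(&b, NULL, sleeper, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Build with "cc -pthread". Dropping the fence and the RMW ordering in the
sketch reintroduces the lost-wakeup window; taking one mutex around each
function body, as the patched kernel does with pool->lock, closes it
without any explicit barrier.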
Signed-off-by: Canjiang Lu <craftsfish@xxxxxxx>
---
kernel/workqueue.c | 20 --------------------
1 file changed, 20 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 9880b6c0e272..861f23a6f1ba 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -883,18 +883,6 @@ void wq_worker_sleeping(struct task_struct *task)
 
 	worker->sleeping = 1;
 	raw_spin_lock_irq(&pool->lock);
-
-	/*
-	 * The counterpart of the following dec_and_test, implied mb,
-	 * worklist not empty test sequence is in insert_work().
-	 * Please read comment there.
-	 *
-	 * NOT_RUNNING is clear. This means that we're bound to and
-	 * running on the local cpu w/ rq lock held and preemption
-	 * disabled, which in turn means that none else could be
-	 * manipulating idle_list, so dereferencing idle_list without pool
-	 * lock is safe.
-	 */
 	if (atomic_dec_and_test(&pool->nr_running) &&
 	    !list_empty(&pool->worklist)) {
 		next = first_idle_worker(pool);
@@ -1334,14 +1322,6 @@ static void insert_work(struct pool_workqueue *pwq, struct work_struct *work,
 	set_work_pwq(work, pwq, extra_flags);
 	list_add_tail(&work->entry, head);
 	get_pwq(pwq);
-
-	/*
-	 * Ensure either wq_worker_sleeping() sees the above
-	 * list_add_tail() or we see zero nr_running to avoid workers lying
-	 * around lazily while there are works to be processed.
-	 */
-	smp_mb();
-
 	if (__need_more_worker(pool))
 		wake_up_worker(pool);
 }
--
2.17.1