Re: [RFC PATCH(Experimental) 2/4] Revert changes to workqueue.c
From: Oleg Nesterov
Date: Fri Feb 16 2007 - 13:45:58 EST
On 02/16, Srivatsa Vaddagiri wrote:
>
> 2.6.20-mm1 (cwq->should_stop)
> =============================
>
> static void cleanup_workqueue_thread(struct cpu_workqueue_struct *cwq, int cpu)
> {
> 	struct wq_barrier barr;
> 	int alive = 0;
>
> 	spin_lock_irq(&cwq->lock);
> 	if (cwq->thread != NULL) {
> 		insert_wq_barrier(cwq, &barr, 1);
> 		cwq->should_stop = 1;
> 		alive = 1;
> 	}
> 	spin_unlock_irq(&cwq->lock);
>
> 	if (alive) {
> 		wait_for_completion(&barr.done);
>
> 		while (unlikely(cwq->thread != NULL))
> 			cpu_relax();
> 		/*
> 		 * Wait until cwq->thread unlocks cwq->lock,
> 		 * it won't touch *cwq after that.
> 		 */
> 		smp_rmb();
> 		spin_unlock_wait(&cwq->lock);
> 	}
> }
>
> Patch (based on kthread_should_stop)
> ====================================
>
> static void cleanup_workqueue_thread(struct workqueue_struct *wq, int cpu)
> {
> 	struct cpu_workqueue_struct *cwq = per_cpu_ptr(wq->cpu_wq, cpu);
>
> 	if (cwq->thread != NULL) {
> 		kthread_stop(cwq->thread);
> 		cwq->thread = NULL;
> 	}
> }
>
> > No more changes are required, cwq_should_stop() just works
> > because it is more flexible than kthread_should_stop().
>
> What is more flexible about cwq_should_stop()?
- it doesn't use a global semaphore
- it works with or without freezer
- it works with or without take_over_work()
- it doesn't require that we have no pending works when
  cleanup_workqueue_thread() is called (see the sketch below).
- worker_thread() doesn't need to have 2 different conditions
to exit in 2 different ways.
- it allows us to do further improvements (don't take workqueue
mutex for the whole cpu-hotplug event), but this needs more work
and probably is not valid any longer if we use freezer.
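
To make that concrete, here is a minimal sketch of the worker-side check
the flag implies. It is a reconstruction from the code quoted above, not
a verbatim copy of the 2.6.20-mm1 source; the ->worklist field name and
the list_empty() test are assumptions about how the pending works are
tracked:

static int cwq_should_stop(struct cpu_workqueue_struct *cwq)
{
	int should_stop = cwq->should_stop;

	if (unlikely(should_stop)) {
		spin_lock_irq(&cwq->lock);
		/* honour the flag only once the pending works are drained */
		should_stop = cwq->should_stop && list_empty(&cwq->worklist);
		if (should_stop) {
			/* cleanup_workqueue_thread() spins on ->thread */
			cwq->thread = NULL;
		}
		spin_unlock_irq(&cwq->lock);
	}

	return should_stop;
}

worker_thread() can then just loop on !cwq_should_stop(cwq): it keeps
running works while any are queued and exits through a single path,
which is what the points about pending works and the two exit
conditions refer to.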
Ok. This is a matter of taste. I will not argue if you send a patch
to convert the code to use kthread_stop() again (if it is correct :),
but let it be a separate change, please.
Oleg.