Re: [PATCH 10/10 V2] workqueue: use generic attach/detach routine for rescuers

From: Tejun Heo
Date: Mon May 12 2014 - 18:05:15 EST


On Mon, May 12, 2014 at 02:56:22PM +0800, Lai Jiangshan wrote:
> There are several problems with the code by which rescuers bind themselves
> to the pool's cpumask:
> 1) It uses a different mechanism from the normal workers to bind to the
> cpumask, so we can't maintain the normal and rescuer workers under the
> same framework.
> 2) The cpu-binding code for rescuers is complicated.
> 3) If one or more cpu hotplugs happen while the rescuer processes the
> scheduled works, the rescuer may not be correctly bound to the cpumask of
> the pool. This is allowed behavior, but it is not good. It would be better
> if the cpumask of the rescuer were always kept in sync with the pool's
> across any cpu hotplugs.
>
> Using the generic attach/detach routines solves the above problems
> and results in much simpler code.
>
> Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxx>
> static struct worker *alloc_worker(void)
> {
> struct worker *worker;
> @@ -2343,8 +2279,9 @@ repeat:
>
> spin_unlock_irq(&wq_mayday_lock);
>
> - /* migrate to the target cpu if possible */
> - worker_maybe_bind_and_lock(pool);
> + worker_attach_to_pool(rescuer, pool);
> +
> + spin_lock_irq(&pool->lock);
> rescuer->pool = pool;
>
> /*
> @@ -2357,6 +2294,11 @@ repeat:
> move_linked_works(work, scheduled, &n);
>
> process_scheduled_works(rescuer);
> + spin_unlock_irq(&pool->lock);
> +
> + worker_detach_from_pool(rescuer, pool);
> +
> + spin_lock_irq(&pool->lock);

Ah, right, this is how it's used. Yeah, it makes sense. In a long
patchset, it usually helps to mention your intentions when structuring
functions tho. When you're separating out detach_from_pool, just
mention that the function will later be used to make rescuers use the
same attach/detach framework as normal workers.

How has this been tested?

Thanks.

--
tejun