Re: [PATCH 04/10] workqueue: destroy worker directly in the idle timeout handler
From: Lai Jiangshan
Date: Wed May 07 2014 - 11:30:54 EST
On Wed, May 7, 2014 at 9:41 PM, Tejun Heo <tj@xxxxxxxxxx> wrote:
> On Wed, May 07, 2014 at 09:38:39PM +0800, Lai Jiangshan wrote:
>> On Wed, May 7, 2014 at 9:12 PM, Tejun Heo <tj@xxxxxxxxxx> wrote:
>> > Hello, Lai.
>> >
>> > On Wed, May 07, 2014 at 03:10:20PM +0800, Lai Jiangshan wrote:
>> >> 1) complete() can't be called while holding attach_mutex, because the
>> >> worker must not access the pool after complete().
>> >
>> > Sure, complete it after releasing the lock. Shutdown can't complete
>> > before the completion gets completed, right?
>> >
>> >> 2) put_unbound_pool() may be called from get_unbound_pool(); we need to
>> >> add an additional check and skip the wait_for_completion() in that case.
>>
>> Would you accept it if I remove the put_unbound_pool() call from
>> get_unbound_pool() and open-code the freeing instead?
>
> Hah? How much extra complexity are we talking about? It's a single
> if, no?
        DECLARE_COMPLETION_ONSTACK(completion);                 /* #1 */
        ...
        while ((worker = first_worker(pool))) {
                destroy_worker(worker);
                pool->detach_completion = &completion;          /* #2 */
        }
        ...
        unlock;
        if (pool->detach_completion)
                wait_for_completion(pool->detach_completion);   /* #3 */
One logical operation ends up split across three places (#1, #2, #3),
about 5~7 lines in total. I would prefer a single wait_for_completion()
or a single wait_event().
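
For reference, a minimal sketch of how I imagine the two halves of that
handshake fitting together (the worker_detach_from_pool() helper, the
attach_mutex locking and the field names are illustrative assumptions,
not final code):

        /* Worker side: drop attach_mutex first, complete() last, so the
         * dying worker never touches the pool after complete(). */
        static void worker_detach_from_pool(struct worker *worker,
                                            struct worker_pool *pool)
        {
                struct completion *detach_completion = NULL;

                mutex_lock(&pool->attach_mutex);
                list_del(&worker->node);
                if (list_empty(&pool->workers))
                        detach_completion = pool->detach_completion;
                mutex_unlock(&pool->attach_mutex);

                if (detach_completion)
                        complete(detach_completion);
        }

        /* Destroyer side: the three places (#1..#3) from the fragment above. */
        static void put_unbound_pool(struct worker_pool *pool)
        {
                DECLARE_COMPLETION_ONSTACK(detach_completion);          /* #1 */
                struct worker *worker;

                spin_lock_irq(&pool->lock);
                while ((worker = first_worker(pool)))
                        destroy_worker(worker);
                spin_unlock_irq(&pool->lock);

                mutex_lock(&pool->attach_mutex);
                if (!list_empty(&pool->workers))
                        pool->detach_completion = &detach_completion;   /* #2 */
                mutex_unlock(&pool->attach_mutex);

                if (pool->detach_completion)
                        wait_for_completion(pool->detach_completion);   /* #3 */
        }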
The current failure path in get_unbound_pool():

fail:
        if (pool)
                put_unbound_pool(pool);

I think we can change it into:

fail:
        if (pool) {
                if (pool->id >= 0)
                        idr_remove(&worker_pool_idr, pool->id);
                call_rcu_sched(&pool->rcu, rcu_free_pool);
        }
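
On this failure path no worker has been created or attached to the pool
yet, so there is nothing to wait for and the pool can be freed directly.
Assuming the RCU callback keeps roughly its current shape (exactly which
per-pool resources it releases depends on the tree), it would be
something like:

        static void rcu_free_pool(struct rcu_head *rcu)
        {
                struct worker_pool *pool =
                        container_of(rcu, struct worker_pool, rcu);

                /* release the remaining per-pool resources */
                free_workqueue_attrs(pool->attrs);
                kfree(pool);
        }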
Thanks,
Lai