Re: [PATCH 19/19] workqueue: implement concurrency managed workqueue
From: Frédéric Weisbecker
Date: Fri Oct 02 2009 - 10:29:00 EST
2009/10/1 Tejun Heo <tj@xxxxxxxxxx>:
> Currently each workqueue has its own dedicated worker pool. This
> causes the following problems.
>
> * Works which are dependent on each other can cause a deadlock by
> depending on the same execution resource. This is bad because this
> type of dependency is quite difficult to find.
>
> * Works which may sleep and take a long time to finish need to have
> separate workqueues so that they don't block other works. Similarly,
> works which want to be executed in a timely manner often need to
> create their own custom workqueues to avoid being blocked by long
> running ones. This leads to a large number of workqueues and thus
> many workers.
>
> * The static one-per-cpu worker isn't good enough for jobs which
> require a higher level of concurrency, necessitating other worker
> pool mechanisms. slow-work and async are good examples and there are
> also some custom implementations buried in subsystems.
>
> * Combined, the above factors lead to many workqueues with a large
> number of dedicated and mostly unused workers. This also makes work
> processing less optimal: the dedicated workers end up switching among
> themselves, costing scheduling overhead and wasting cache footprint
> for their stacks, and as the system gets busy, these workers end up
> competing with each other.
>
> To solve the above issues, this patch implements concurrency-managed
> workqueue.
>
> There is a single global cpu workqueue (gcwq) for each cpu which
> serves all the workqueues. The gcwq maintains a single pool of
> workers which is shared by all cwqs on the cpu.
>
> The gcwq keeps the number of concurrently active workers to the
> minimum necessary, but no less. As long as there is one or more
> running workers on the cpu, no new worker is scheduled so that works
> can be processed in batches as much as possible; but when the last
> running worker blocks, the gcwq immediately schedules a new worker so
> that the cpu doesn't sit idle while there are works to be processed.
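If I read the layout right, the per-cpu picture boils down to something
like the sketch below: every workqueue still has its per-cpu cwq, but the
cwq just points at the shared gcwq instead of owning workers. The struct
and field names here are only my own guess at the idea, not necessarily
what the patch uses:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>
#include <asm/atomic.h>

/* one shared pool of workers per cpu */
struct global_cwq {
        spinlock_t              lock;           /* protects the lists below */
        struct list_head        worklist;       /* pending works from all wqs on this cpu */
        struct list_head        idle_list;      /* workers with nothing to run */
        atomic_t                nr_running;     /* workers currently not blocked */
        unsigned int            cpu;
};

/* per-cpu, per-workqueue glue: no private workers anymore */
struct cpu_workqueue_struct {
        struct global_cwq       *gcwq;          /* the shared per-cpu pool */
        struct workqueue_struct *wq;            /* owning workqueue */
};

DEFINE_PER_CPU(struct global_cwq, global_cwqs);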
That's really a cool thing.
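And if I got the concurrency management part right, the decision point
amounts to roughly the sketch below, building on the struct above: the
scheduler notifies the gcwq when a worker blocks or wakes up, and only
when the last running worker blocks while the worklist is non-empty does
an idle worker get woken. struct worker, first_idle_worker() and the two
hooks are hypothetical names of mine, just to illustrate:

#include <linux/sched.h>        /* wake_up_process() */

/* a worker attached to the shared pool */
struct worker {
        struct task_struct      *task;          /* the worker's kthread */
        struct global_cwq       *gcwq;          /* pool it belongs to */
        struct list_head        entry;          /* on gcwq->idle_list when idle */
};

/* hypothetical: pick any worker sitting on the idle list */
static struct worker *first_idle_worker(struct global_cwq *gcwq)
{
        if (list_empty(&gcwq->idle_list))
                return NULL;
        return list_first_entry(&gcwq->idle_list, struct worker, entry);
}

/* conceptually called from the scheduler when a worker blocks */
static void gcwq_worker_sleeping(struct global_cwq *gcwq)
{
        /* last running worker just blocked and work is still pending? */
        if (atomic_dec_and_test(&gcwq->nr_running) &&
            !list_empty(&gcwq->worklist)) {
                struct worker *idle = first_idle_worker(gcwq);

                if (idle)
                        wake_up_process(idle->task);
        }
}

/* conceptually called when a blocked worker becomes runnable again */
static void gcwq_worker_waking_up(struct global_cwq *gcwq)
{
        atomic_inc(&gcwq->nr_running);
}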
So once such new workers are created, what's the state/event that triggers their
destruction?
Is it the following, propagated recursively?
Worker A blocks.
B is created.
B has just finished a worklet and A has been woken up.
Then B is destroyed.
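In pseudo-code terms, is it something like this? (Purely hypothetical,
just to make the question concrete; process_one_work() stands for
whatever runs a single worklet.)

/* B's main loop, in the scenario above */
static int extra_worker_thread(void *__worker)
{
        struct worker *worker = __worker;
        struct global_cwq *gcwq = worker->gcwq;

        while (!list_empty(&gcwq->worklist)) {
                process_one_work(worker);       /* hypothetical helper */

                /*
                 * A has been woken up meanwhile, so more than one
                 * worker is running again: does B stop right here?
                 */
                if (atomic_read(&gcwq->nr_running) > 1)
                        break;
        }
        return 0;                               /* worker B destroyed */
}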