Re: [PATCH 4/4] workqueue: Include ordered workqueues in anon workqueue sysfs interface
From: Frederic Weisbecker
Date: Thu Apr 03 2014 - 11:59:58 EST
On Mon, Mar 31, 2014 at 09:15:26PM +0800, Lai Jiangshan wrote:
> On 03/31/2014 08:50 PM, Lai Jiangshan wrote:
>
> Sorry, I'm wrong.
> Tejun said there is only one default worker pool for ordered workqueues.
>
> That is true. But this pool may be shared with other non-ordered workqueues
> which may have WQ_SYSFS. We should split this pool into two pools:
> one for ordered wqs, one for the rest.
Ah, good point, there is a side effect here. OK, the idea of a low-level unbound
cpumask should solve that issue, since we want every unbound workqueue to be affected.
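
For what it's worth, here is a rough sketch of how I picture that low-level mask.
The names wq_unbound_cpumask, wq_calc_effective_cpumask() and
wq_unbound_cpumask_update() are made up for illustration and are not part of the
posted patches:

	/*
	 * Illustration only: a single low-level cpumask that constrains
	 * every unbound workqueue, ordered or not.
	 * Protected by wq_pool_mutex.
	 */
	static cpumask_var_t wq_unbound_cpumask;

	/*
	 * Restrict a workqueue's attrs cpumask by the global low-level
	 * mask. Fall back to the full low-level mask if the
	 * intersection is empty.
	 */
	static void wq_calc_effective_cpumask(struct workqueue_attrs *attrs)
	{
		cpumask_and(attrs->cpumask, attrs->cpumask, wq_unbound_cpumask);
		if (cpumask_empty(attrs->cpumask))
			cpumask_copy(attrs->cpumask, wq_unbound_cpumask);
	}

	/* Reapply attrs to all unbound wqs after the low-level mask changed */
	static int wq_unbound_cpumask_update(void)
	{
		struct workqueue_struct *wq;
		int ret = 0;

		list_for_each_entry(wq, &workqueues, list) {
			struct workqueue_attrs *attrs;

			if (!(wq->flags & WQ_UNBOUND))
				continue;
			/* Ordered wqs share one pool, handled separately */
			if (wq->flags & __WQ_ORDERED)
				continue;

			attrs = wq_sysfs_prep_attrs(wq);
			if (!attrs)
				return -ENOMEM;
			wq_calc_effective_cpumask(attrs);
			ret = apply_workqueue_attrs(wq, attrs);
			free_workqueue_attrs(attrs);
			if (ret)
				break;
		}
		return ret;
	}

The point being that the mask applies below the per-workqueue attrs, so sysfs-visible
and anonymous unbound workqueues alike stay inside it.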
[...]
> >> +/* Must be called with wq_unbound_mutex held */
> >> +static int wq_anon_cpumask_set_all(cpumask_var_t cpumask)
> >> +{
> >> struct workqueue_struct *wq;
> >> int ret;
> >>
> >> @@ -3343,15 +3393,9 @@ static int wq_anon_cpumask_set(cpumask_var_t cpumask)
> >> continue;
> >> /* Ordered workqueues need specific treatment */
> >> if (wq->flags & __WQ_ORDERED)
> >> - continue;
> >> -
> >> - attrs = wq_sysfs_prep_attrs(wq);
> >> - if (!attrs)
> >> - return -ENOMEM;
> >> -
> >> - cpumask_copy(attrs->cpumask, cpumask);
> >> - ret = apply_workqueue_attrs(wq, attrs);
> >> - free_workqueue_attrs(attrs);
> >> + ret = wq_ordered_cpumask_set(wq, cpumask);
>
>
>
> After the pool is split, wq_ordered_cpumask_set() should only be called once;
> once is enough for all ordered wqs.
True. I was just worried that the implementation might change in the future, with some
ordered unbound workqueues getting their own private pool, in case we hit problems
with the works of one workqueue blocking the works of another while they wait on
heavy resources.
The workqueue subsystem usually deals with that by launching new workers on demand
to handle the waiting works in the queue. But that doesn't seem to apply to ordered
unbound workqueues: they have max_active=1, so they can't spawn another worker when
one work blocks the others. So I wouldn't be surprised if such an issue ends up being
resolved by giving some ordered workqueues private pools.
IIUC.
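
To illustrate the scenario I mean (an entirely made-up example; slow_fn(), fast_fn()
and the workqueue name are hypothetical):

	#include <linux/module.h>
	#include <linux/workqueue.h>
	#include <linux/delay.h>

	/*
	 * On an ordered workqueue max_active is 1, so while slow_fn()
	 * sleeps on some heavy resource, fast_fn() sits in the queue
	 * and no extra worker can be spawned to run it.
	 */
	static void slow_fn(struct work_struct *work)
	{
		msleep(10000);	/* stand-in for waiting on a heavy resource */
	}

	static void fast_fn(struct work_struct *work)
	{
		pr_info("fast work finally running\n");
	}

	static DECLARE_WORK(slow_work, slow_fn);
	static DECLARE_WORK(fast_work, fast_fn);

	static int __init ordered_block_init(void)
	{
		struct workqueue_struct *wq;

		wq = alloc_ordered_workqueue("ordered_block_demo", 0);
		if (!wq)
			return -ENOMEM;

		queue_work(wq, &slow_work);
		/* Blocked ~10s behind slow_work despite being independent */
		queue_work(wq, &fast_work);
		return 0;
	}
	module_init(ordered_block_init);
	MODULE_LICENSE("GPL");

On a regular unbound workqueue the pool would just spawn another worker to run
fast_work concurrently; the max_active=1 ordering guarantee is what forbids that
here, hence the temptation of private pools.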