Re: work item still be scheduled to execute after destroy_workqueue?
From: richard clark
Date: Tue Dec 06 2022 - 04:23:25 EST
On Tue, Dec 6, 2022 at 2:23 PM Lai Jiangshan <jiangshanlai@xxxxxxxxx> wrote:
>
> On Tue, Dec 6, 2022 at 12:35 PM richard clark
> <richard.xnu.clark@xxxxxxxxx> wrote:
>
> > >
> > A WARN is definitely reasonable and has its benefits. Can I try to
> > submit the patch, and would you be kind enough to review it as maintainer?
> >
> > Thanks,
> > Richard
> > >
>
> Sure, go ahead.
>
> What I have in mind is wrapping the following code in a new function:
>
> mutex_lock(&wq->mutex);
> if (!wq->nr_drainers++)
> wq->flags |= __WQ_DRAINING;
> mutex_unlock(&wq->mutex);
>
>
> and the new function replaces the open-coded version in drain_workqueue() and
> is also called in destroy_workqueue() (before it calls drain_workqueue()).
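[Editor's sketch: Lai's suggestion can be modeled in plain C as below. This is a minimal userspace model using pthreads, not kernel code; the helper names wq_mark_draining()/wq_unmark_draining() and the flag value are hypothetical stand-ins for the real struct workqueue_struct, wq->mutex, and __WQ_DRAINING.]

```c
#include <assert.h>
#include <pthread.h>

#define __WQ_DRAINING (1u << 16)   /* hypothetical flag value for the model */

/* Userspace stand-in for struct workqueue_struct. */
struct workqueue {
	pthread_mutex_t mutex;
	int nr_drainers;
	unsigned int flags;
};

/*
 * Proposed helper: factor out the open-coded prologue of
 * drain_workqueue() so destroy_workqueue() can call it too.
 * The first drainer sets __WQ_DRAINING.
 */
static void wq_mark_draining(struct workqueue *wq)
{
	pthread_mutex_lock(&wq->mutex);
	if (!wq->nr_drainers++)
		wq->flags |= __WQ_DRAINING;
	pthread_mutex_unlock(&wq->mutex);
}

/* The matching decrement: the last drainer clears the flag. */
static void wq_unmark_draining(struct workqueue *wq)
{
	pthread_mutex_lock(&wq->mutex);
	if (!--wq->nr_drainers)
		wq->flags &= ~__WQ_DRAINING;
	pthread_mutex_unlock(&wq->mutex);
}
```

Because the increment and decrement are balanced per caller, the flag stays set for the whole lifetime of the outermost drain, which is what lets destroy_workqueue() keep it set until teardown completes.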
>
Beyond that, do we need to defer clearing __WQ_DRAINING to the RCU
callback, so that the drainer count still forms a closed loop, like
this?
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3528,6 +3526,9 @@ static void rcu_free_wq(struct rcu_head *rcu)
 	else
 		free_workqueue_attrs(wq->unbound_attrs);
 
+	if (!--wq->nr_drainers)
+		wq->flags &= ~__WQ_DRAINING;
+
 	kfree(wq);
>
> __WQ_DRAINING will cause the needed WARN when items are illegally queued
> on a destroyed workqueue.
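[Editor's sketch: the WARN mentioned above comes from the check at the top of __queue_work(), which rejects queueing on a draining workqueue. A minimal userspace model of that check is below; queue_work_model() is a hypothetical stand-in, and the real code additionally permits chained work via is_chained_work(), omitted here for brevity.]

```c
#include <assert.h>
#include <stdbool.h>

#define __WQ_DRAINING (1u << 16)   /* hypothetical flag value for the model */

/* Userspace stand-in for struct workqueue_struct. */
struct workqueue {
	unsigned int flags;
};

static int warned;   /* stands in for WARN_ON_ONCE() having fired */

/*
 * Model of the __queue_work() entry check: if the workqueue is
 * draining (or, with the proposed patch, being destroyed), warn
 * and refuse to queue the work item.
 */
static bool queue_work_model(struct workqueue *wq)
{
	if (wq->flags & __WQ_DRAINING) {
		warned = 1;       /* WARN_ON_ONCE(!is_chained_work(wq)) */
		return false;     /* work item is not queued */
	}
	return true;              /* work item would be queued */
}
```

This is why keeping __WQ_DRAINING set until rcu_free_wq() runs gives the desired warning for any queueing attempt after destroy_workqueue().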
I will re-test it if there are no concerns about the above fix...
>
> Thanks
> Lai