Re: [PATCH -rc] workqueue: Reimplement UAF fix to avoid lockdep warning
From: Leon Romanovsky
Date: Thu Jun 06 2024 - 03:38:25 EST
On Wed, Jun 05, 2024 at 07:10:55PM +0800, Hillf Danton wrote:
> On Tue, 4 Jun 2024 21:58:04 +0300 Leon Romanovsky <leon@xxxxxxxxxx>
> > On Tue, Jun 04, 2024 at 06:30:49AM -1000, Tejun Heo wrote:
> > > On Tue, Jun 04, 2024 at 02:38:34PM +0300, Leon Romanovsky wrote:
> > > > Thanks, it is a very rare situation where a call to flush/drain a queue
> > > > (in our case kthread_flush_worker) in the middle of the allocation
> > > > flow is correct. I can't remember any such case.
> > > >
> > > > So even though we don't fully understand the root cause, the
> > > > reimplementation is still valid and improves the existing code.
> > >
> > > It's not valid. pwq release is async while wq free in the error path
> > > isn't. The flush is there so that we finish the async part before the
> > > synchronous error handling. The patch you posted can lead to a double
> > > free after a pwq allocation failure. We can make the error path synchronous,
> > > but the pwq free path should be updated first so that it stays synchronous
> > > in the error path. Note that it *needs* to be asynchronous in non-error
> > > paths, so it's going to be a bit subtle one way or the other.
> >
> > But at that point, we hadn't added the newly created WQ to any list which
> > would execute that asynchronous release. Did I miss something?
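A rough sketch of the double free in question, pieced together from
alloc_workqueue() and pwq_release_workfn() (abridged, labels approximate,
not a literal quote of either):

	/* alloc_workqueue(): once alloc_and_link_pwqs() fails, the wq is
	 * torn down synchronously and freed at the end of the error path.
	 */
	if (alloc_and_link_pwqs(wq) < 0)
		goto err_free_node_nr_active;
	...
err_free_wq:
	free_workqueue_attrs(wq->unbound_attrs);
	kfree(wq);

	/* pwq_release_workfn(): runs asynchronously from pwq_release_worker.
	 * If any pwqs were already linked to wq->pwqs before the failure,
	 * releasing the last of them frees the wq a second time unless the
	 * two paths are ordered against each other.
	 */
	if (is_last) {
		wq_unregister_lockdep(wq);
		call_rcu(&wq->rcu, rcu_free_wq);
	}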
> >
> Maybe it is more subtle than thought, but it is not difficult to make the wq
> allocation path sync. See if the patch below survives your test.
Thanks, I started to run our tests with Dan's revert:
https://lore.kernel.org/all/171711745834.1628941.5259278474013108507.stgit@xxxxxxxxxxxxxxxxxxxxxxxxx/
Preliminary results show that it fixes my lockdep warnings, but it will take time until I have full confidence.
If it doesn't hold up, I will try your patch.
Thanks
>
> --- x/include/linux/workqueue.h
> +++ y/include/linux/workqueue.h
> @@ -402,6 +402,7 @@ enum wq_flags {
>  	 */
>  	WQ_POWER_EFFICIENT	= 1 << 7,
>
> +	__WQ_INITIALIZING	= 1 << 14, /* internal: workqueue is initializing */
>  	__WQ_DESTROYING		= 1 << 15, /* internal: workqueue is destroying */
>  	__WQ_DRAINING		= 1 << 16, /* internal: workqueue is draining */
>  	__WQ_ORDERED		= 1 << 17, /* internal: workqueue is ordered */
> --- x/kernel/workqueue.c
> +++ y/kernel/workqueue.c
> @@ -5080,6 +5080,8 @@ static void pwq_release_workfn(struct kt
>  	 * is gonna access it anymore. Schedule RCU free.
>  	 */
>  	if (is_last) {
> +		if (wq->flags & __WQ_INITIALIZING)
> +			return;
>  		wq_unregister_lockdep(wq);
>  		call_rcu(&wq->rcu, rcu_free_wq);
>  	}
> @@ -5714,8 +5716,10 @@ struct workqueue_struct *alloc_workqueue
>  		goto err_unreg_lockdep;
>  	}
>
> +	wq->flags |= __WQ_INITIALIZING;
>  	if (alloc_and_link_pwqs(wq) < 0)
>  		goto err_free_node_nr_active;
> +	wq->flags &= ~__WQ_INITIALIZING;
>
>  	if (wq_online && init_rescuer(wq) < 0)
>  		goto err_destroy;
> --
>
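If I read it right, the idea is that while __WQ_INITIALIZING is set the
release worker never schedules the RCU free of the wq, so a pwq allocation
failure leaves the synchronous error path in alloc_workqueue() as the only
owner of the wq. Roughly, with the error labels abbreviated from the current
alloc_workqueue() (a sketch, not part of the posted patch):

	wq->flags |= __WQ_INITIALIZING;
	if (alloc_and_link_pwqs(wq) < 0)
		goto err_free_node_nr_active;	/* flag stays set on failure */
	wq->flags &= ~__WQ_INITIALIZING;
	...
err_free_node_nr_active:
	free_node_nr_active(wq->node_nr_active);
err_unreg_lockdep:
	wq_unregister_lockdep(wq);
	wq_free_lockdep(wq);
err_free_wq:
	free_workqueue_attrs(wq->unbound_attrs);
	/*
	 * With the flag still set, pwq_release_workfn() leaves the wq alone,
	 * so this is the only free of @wq.
	 */
	kfree(wq);
	return NULL;

The flag is cleared as soon as alloc_and_link_pwqs() succeeds, so the
asynchronous release that the non-error paths rely on should be unaffected.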