Re: corruption causing crash in __queue_work
From: Mike Snitzer
Date: Fri Dec 11 2015 - 14:14:07 EST
On Fri, Dec 11 2015 at 1:00pm -0500,
Nikolay Borisov <n.borisov@xxxxxxxxxxxxxx> wrote:
> On Fri, Dec 11, 2015 at 7:08 PM, Tejun Heo <tj@xxxxxxxxxx> wrote:
> >
> > Hmmm... No idea why it didn't show up in the debug log but the only
> > way a workqueue could be in the above state is either it got
> > explicitly destroyed or somehow pwq refcnting is messed up, in both
> > cases it should have shown up in the log.
> >
> > cc'ing dm people. Is there any chance dm-thinp could be using
> > workqueue after destroying it?
Not that I'm aware of. But never say never?
Plus I'd think we'd see other dm-thinp-specific use-after-free issues
aside from the thin-pool's workqueue.
> In __pool_destroy in dm-thin.c I don't see a call to
> cancel_delayed_work before destroying the workqueue. Is it possible
> that this is the cause?
Cannot see how: __pool_destroy()'s destroy_workqueue() would spew a
bunch of WARN_ONs (and the wq wouldn't be destroyed) if the workqueue
had outstanding work.
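
For reference, here is a rough sketch of the teardown ordering in
question. It is illustrative only -- the struct and field names are
invented for the example, not dm-thin's actual definitions -- but it
shows why outstanding work at destroy time would be noisy rather than
silent:

#include <linux/workqueue.h>
#include <linux/slab.h>

/* Invented for illustration; not dm-thin's real struct pool. */
struct example_pool {
        unsigned ref_count;
        struct workqueue_struct *wq;
        struct delayed_work waker;      /* a delayed/periodic work item */
};

static void example_pool_destroy(struct example_pool *pool)
{
        /* Make sure the delayed work can no longer fire or re-queue itself... */
        cancel_delayed_work_sync(&pool->waker);

        /*
         * ...then tear down the workqueue.  destroy_workqueue() drains any
         * remaining work and spews WARN_ONs if work is still outstanding,
         * so a teardown racing with queued work should be loud, not silent.
         */
        destroy_workqueue(pool->wq);
        kfree(pool);
}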
__pool_destroy() is called once the thin-pool's ref count drops to 0
(see __pool_dec(), which is called when the thin-pool is removed --
e.g. with 'dmsetup remove'). This code is only reachable when nothing
else is using the thin-pool.
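
A minimal sketch of that refcount-guarded teardown, reusing the
invented example_pool from the sketch above (again, not the actual
__pool_dec() code):

static void example_pool_dec(struct example_pool *pool)
{
        BUG_ON(!pool->ref_count);
        if (!--pool->ref_count)
                example_pool_destroy(pool);     /* last user gone */
}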
And the thin-pool can only be removed once all thin devices that
depend on it have been removed. And each individual thin device
waits for all of its outstanding IO to complete before it can be removed.