Re: [PATCH 2/2] scsi: don't use execute_in_process_context()

From: James Bottomley
Date: Thu Dec 16 2010 - 09:39:54 EST


On Wed, 2010-12-15 at 20:42 +0100, Tejun Heo wrote:
> On 12/15/2010 08:33 PM, James Bottomley wrote:
> > A single flush won't quite work. The target is a parent of the device,
> > both of which release methods have execute_in_process_context()
> > requirements. What can happen here is that the last put of the device
> > will release the target (from the function). If both are moved to
> > workqueues, a single flush could cause the execution of the device work,
> > which then queues up target work (and makes it still pending). A double
> > flush will solve this (because I think our nesting level doesn't go
> > beyond 2) but it's a bit ugly ...
>
> Yeap, that's an interesting point actually. I just sent the patch
> but there is no explicit flush. It's implied by destroy_workqueue() and
> it has been a bit bothering that destroy_workqueue() could exit with
> pending works if execution of the current one produces more. I was
> pondering making destroy_workqueue() actually drain all the scheduled
> works and maybe trigger a warning if it seems to loop for too long.
>
> But, anyways, I don't think that's gonna happen here. If the last put
> hasn't been executed the module reference wouldn't be zero, so module
> unload can't initiate, right?

Wrong I'm afraid. There's a nasty two level complexity in module
references: Anything which takes an external reference (like open or
mount) does indeed take the module reference and prevent removal.
Anything that takes an internal reference doesn't ... we wait for all of
them to come back in the final removal of the bus type. This is to
prevent a module removal deadlock. The callbacks are internal
references, so we wait for them in module_exit() but don't block
module_exit() from being called ... meaning the double callback scenario
could be outstanding.

James

