Re: USB device cannot be reconnected and khubd "blocked for more than 120 seconds"

From: Linus Torvalds
Date: Mon Jan 14 2013 - 13:35:05 EST


On Mon, Jan 14, 2013 at 10:04 AM, Alan Stern <stern@xxxxxxxxxxxxxxxxxxx> wrote:
>
> How about skipping that call if the current thread is one of the async
> helpers? Is it possible to detect when that happens?
>
> Or maybe such a check should go inside async_synchronize_full() itself.

Do we have some idea of exactly what is waiting for what? Which async
context is causing the module load to happen in the first place?

I think *that* is what we should avoid - it sounds like the block
layer is loading the IO scheduler at the wrong point. I realize that
people like (for testing purposes) to change the IO scheduler at
random, but if that means that any IO can basically result in a
request_module(), then that sounds like a problem.

It seems to be "elevator_get()", and I presume the chain is something
like this: the block driver loads async; the block driver does
blk_init_allocated_queue(); that does request_module() to find the
elevator; the request_module() succeeds, but ends up waiting for
async work - which is the block driver load, which in turn is waiting
for the request_module() to finish.
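
Roughly this shape, if memory serves (a from-memory sketch of the
block/elevator.c lookup path, not the verbatim source):

	static struct elevator_type *elevator_get(const char *name)
	{
		struct elevator_type *e;

		spin_lock(&elv_list_lock);
		e = elevator_find(name);
		if (!e) {
			spin_unlock(&elv_list_lock);
			/*
			 * Synchronous load: this blocks until the module
			 * init has run _and_ the module loader has done
			 * async_synchronize_full() - which never finishes
			 * if we got here from an async worker ourselves.
			 */
			request_module("%s-iosched", name);
			spin_lock(&elv_list_lock);
			e = elevator_find(name);
		}
		spin_unlock(&elv_list_lock);
		return e;
	}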

And my gut feel is that blk_init_allocated_queue() probably shouldn't
do that request_module() at all. We might want to do it when we *open*
the device, but not while loading the module for the device.
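
IOW, something in this direction - a sketch of the idea only, not a
tested patch (chosen_elevator is the boot-time "elevator=" string,
and "noop" is always built in):

	int elevator_init(struct request_queue *q, char *name)
	{
		struct elevator_type *e;

		/* Never request_module() here: queue init may be
		 * running from an async worker. Take what is already
		 * built in instead. */
		e = elevator_find(name ? name : chosen_elevator);
		if (!e)
			e = elevator_find("noop");
		if (!e)
			return -ENOENT;

		return e->ops.elevator_init_fn(q);
	}

An explicit elevator switch later (the sysfs write, or the first
open) runs in normal process context and can load modules safely
there.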

So my _feeling_ is that this is just a bug in the block layer, and
that it shouldn't set up block device drivers for this kind of crazy
"need to load the elevator module while in the middle of scanning
devices". I think *that* is what we should aim to change.

Hmm?

That said, I think it might indeed be a good idea to make this problem
much easier to see, and that "detect when it happens" would be a good
thing (and then we should WARN_ON_ONCE() on people trying to do
request_module() calls from async context).
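
The check itself would be trivial - something like this in
__request_module(), assuming we grow a helper (call it
current_is_async()) that says whether current is running an
async_schedule() callback:

	/* A synchronous request_module() from async context can
	 * deadlock against the module loader's
	 * async_synchronize_full(), so make it loud.
	 * current_is_async() is the assumed helper. */
	WARN_ON_ONCE(wait && current_is_async());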

Linus