On Tue, Apr 18 2000, Steve Dodd wrote:
> > The code you mentioned simply restricts the loop device to use only
> > half of the available requests. _All_ of the other requests are
> > available to anyone wanting to complete an operation. Any single
> > loop request can use as many other requests as it wants: if the bit
> > of the queue reserved for loop requests is full, it will just end
> > up using a shorter remaining queue for its own requests.
> [..]
>
> OK, Jens has confirmed that his multiple freelist stuff /doesn't/ fix the
> loopdev deadlocks. However, I'm not sure there isn't a really extreme
> condition that could cause this. do_lo_request can easily sleep in fs
> code, and AFAICS nothing stops tq_disk getting re-run while it is sleeping.
> So running tq_disk is not guaranteed to make progress in freeing requests
> (it may even make things worse) before it is called again, if do_lo_req is
> on tq_disk..
tq_disk is very likely to run again while do_lo_request is running. Follow
the path to grab_cache_page() -- if the page in question is locked,
__find_lock_page ends up running tq_disk again. As far as I can see,
this is what is causing the problems.
> Would it make any sense to get loop.o doing all its I/O asynchronously, so
> it never has to sleep in the request fn?
I did that yesterday. Loop transfers all requests in do_lo_request and
wakes a sleeping thread, which then does the work. It seemed like a bit
of overkill, but it may be the only way to get around this problem.
--
* Jens Axboe <axboe@suse.de>
* Linux CD/DVD-ROM, SuSE Labs
* http://kernel.dk
This archive was generated by hypermail 2b29 : Sun Apr 23 2000 - 21:00:13 EST