Re: blk_throtl_exit taking q->queue_lock is problematic

From: Vivek Goyal
Date: Thu Feb 17 2011 - 15:00:24 EST


On Wed, Feb 16, 2011 at 06:31:14PM +1100, NeilBrown wrote:
>
>
> Hi,
>
> I recently discovered that blk_throtl_exit takes ->queue_lock when a blockdev
> is finally released.
>
> This is a problem because by that time the queue_lock doesn't exist any
> more. It is in a separate data structure controlled by the RAID personality,
> and by the time the block device is being destroyed the RAID personality
> has shut down and the data structure containing the lock has been freed.

Hi Neil,

I was having a look at queue allocation in md and have a few queries.

I was looking at md_alloc(), where we do

mddev->queue = blk_alloc_queue(GFP_KERNEL);
blk_queue_make_request(mddev->queue, md_make_request);

The call to blk_queue_make_request() makes sure queue_lock is initialized
to the internal __queue_lock.

Then I looked at raid0_run(), which sets the queue lock again:

mddev->queue->queue_lock = &mddev->queue->__queue_lock;

I think this is redundant now, as we already did it during md_alloc().

The same seems to be the case for linear.c and multipath.c.

The following seem to be the cases where we override the default lock:

raid1.c, raid5.c, raid10.c

I was going through raid1.c, and I see that q->queue_lock has been
initialized to &conf->device_lock. Can we do the reverse: introduce a
spinlock pointer in conf and point it at queue->queue_lock? Anyway, you
mentioned that the personality's data structures are freed before the
request queue is cleaned up, so there should not be any lifetime issues.

Also, I was wondering how it helps to share the lock between the request
queue and other data structures in the driver.

Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/