[PATCH 3.4 50/88] virtio-blk: Don't free ida when disk is in use
From: Greg Kroah-Hartman
Date: Mon Jun 09 2014 - 20:38:27 EST
3.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Graf <agraf@xxxxxxx>
commit f4953fe6c4aeada2d5cafd78aa97587a46d2d8f9 upstream.
When a file system is mounted on a virtio-blk disk and we then hot-remove
the disk and reattach it, the reattached disk gets the same disk name and
ids as the hot-removed one.
This leads to very nasty effects, mostly rendering the newly attached
device completely unusable.
Testing the same scenario with a USB device, I saw that the sd node
simply doesn't get freed when a device is forcefully removed.
Imitate the same behavior for vd devices. This way, broken vd devices are
simply never freed and newly attached ones keep working just fine.
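[Editor's illustration, not part of the patch: a minimal user-space C
sketch of the guard the diff below adds. demo_disk, demo_remove(),
id_in_use and the plain int refcount are hypothetical stand-ins for the
gendisk's kobject refcount and vd_index_ida; the real code reads the
kref before put_disk() and only returns the index when no other user
holds a reference.]

#include <stdio.h>
#include <stdlib.h>

struct demo_disk {
        int refcount;           /* stand-in for kobj.kref.refcount */
        int id;                 /* stand-in for the vd index */
};

static int id_in_use[8];        /* stand-in for vd_index_ida */

static void demo_remove(struct demo_disk *disk)
{
        /* Snapshot the refcount before dropping our own reference. */
        int refc = disk->refcount;
        int id = disk->id;

        if (--disk->refcount == 0)
                free(disk);

        /*
         * Only recycle the id if ours was the last reference. An open
         * file system still pins the disk, so its name/id must not be
         * handed out to a newly attached device.
         */
        if (refc == 1)
                id_in_use[id] = 0;
}

int main(void)
{
        struct demo_disk *d = malloc(sizeof(*d));

        d->refcount = 2;        /* driver ref + a mounted file system */
        d->id = 0;
        id_in_use[0] = 1;

        /*
         * The mount still holds a reference, so the disk is leaked on
         * purpose (the "never freed" behavior above) and id 0 stays
         * reserved instead of being handed to a new device.
         */
        demo_remove(d);
        printf("id 0 still reserved: %d\n", id_in_use[0]);     /* 1 */
        return 0;
}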
Signed-off-by: Alexander Graf <agraf@xxxxxxx>
Signed-off-by: Rusty Russell <rusty@xxxxxxxxxxxxxxx>
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
Cc: Yijing Wang <wangyijing@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
drivers/block/virtio_blk.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -573,6 +573,7 @@ static void __devexit virtblk_remove(str
{
struct virtio_blk *vblk = vdev->priv;
int index = vblk->index;
+ int refc;
/* Prevent config work handler from accessing the device. */
mutex_lock(&vblk->config_lock);
@@ -587,11 +588,15 @@ static void __devexit virtblk_remove(str
flush_work(&vblk->config_work);
+ refc = atomic_read(&disk_to_dev(vblk->disk)->kobj.kref.refcount);
put_disk(vblk->disk);
mempool_destroy(vblk->pool);
vdev->config->del_vqs(vdev);
kfree(vblk);
- ida_simple_remove(&vd_index_ida, index);
+
+ /* Only free device id if we don't have any users */
+ if (refc == 1)
+ ida_simple_remove(&vd_index_ida, index);
}
#ifdef CONFIG_PM
--