WARNING: CPU: 0 PID: 1 at fs/block_dev.c:58 __blkdev_put+0x16f/0x1b0()
From: Richard Weinberger
Date: Sat Nov 07 2015 - 07:07:47 EST
Hi!
While playing with the Linux kernel library (LKL) I ran into this warning.
The root cause of the warning is that the backing file on the host
side is locked, and therefore all I/O on it fails with -EIO.
The question is whether ext4/jbd2 should abort earlier or not.
The WARN_ON_ONCE in bdev_write_inode() triggers because jbd2's
do_one_pass() dirties a buffer_head whose write-out later fails.
ext4 then gives up and calls blkdev_put(), so we reach
bdev_write_inode() with a dirty ->bd_inode.
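For reference, the check that fires looks roughly like this (a sketch of
bdev_write_inode() from fs/block_dev.c around that kernel version; the
exact shape may differ in other releases):

```c
/* Approximate fs/block_dev.c, called from __blkdev_put() on last close:
 * flush the block device inode before tearing the bdev down. */
static void bdev_write_inode(struct inode *inode)
{
	spin_lock(&inode->i_lock);
	while (inode->i_state & I_DIRTY) {
		spin_unlock(&inode->i_lock);
		/* Every write to the locked backing file fails with -EIO,
		 * so write_inode_now() returns an error and the warning
		 * below fires. */
		WARN_ON_ONCE(write_inode_now(inode, true));
		spin_lock(&inode->i_lock);
	}
	spin_unlock(&inode->i_lock);
}
```

In this scenario the failed journal recovery leaves dirty buffers on
->bd_inode, so the sync attempted here can only fail.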
What do you think?
[ 0.128000] blk_update_request: I/O error, dev vda, sector 0
[ 0.128000] Buffer I/O error on dev vda, logical block 0, lost async page write
[ 0.128000] blk_update_request: I/O error, dev vda, sector 8
[ 0.128000] Buffer I/O error on dev vda, logical block 1, lost async page write
[ 0.128000] blk_update_request: I/O error, dev vda, sector 528
[ 0.128000] Buffer I/O error on dev vda, logical block 66, lost async page write
[ 0.128000] blk_update_request: I/O error, dev vda, sector 1824
[ 0.128000] Buffer I/O error on dev vda, logical block 228, lost async page write
[ 0.128000] blk_update_request: I/O error, dev vda, sector 1864
[ 0.128000] Buffer I/O error on dev vda, logical block 233, lost async page write
[ 0.128000] blk_update_request: I/O error, dev vda, sector 41840
[ 0.128000] Buffer I/O error on dev vda, logical block 5230, lost async page write
[ 0.128000] blk_update_request: I/O error, dev vda, sector 262672
[ 0.128000] Buffer I/O error on dev vda, logical block 32834, lost async page write
[ 0.128000] blk_update_request: I/O error, dev vda, sector 262680
[ 0.128000] Buffer I/O error on dev vda, logical block 32835, lost async page write
[ 0.128000] blk_update_request: I/O error, dev vda, sector 262752
[ 0.128000] Buffer I/O error on dev vda, logical block 32844, lost async page write
[ 0.128000] blk_update_request: I/O error, dev vda, sector 262792
[ 0.128000] Buffer I/O error on dev vda, logical block 32849, lost async page write
[ 0.132000] JBD2: recovery failed
[ 0.132000] EXT4-fs (vda): error loading journal
[ 0.132000] ------------[ cut here ]------------
[ 0.132000] WARNING: CPU: 0 PID: 1 at fs/block_dev.c:58 __blkdev_put+0x16f/0x1b0()
[ 0.132000] Call Trace:
[ 0.132000] 00007fd99262db68: [<0040673d>] warn_slowpath_common+0x7d/0xb0
[ 0.132000] 00007fd99262dba8: [<004068d5>] warn_slowpath_null+0x15/0x20
[ 0.132000] 00007fd99262dbb8: [<0048efbf>] __blkdev_put+0x16f/0x1b0
[ 0.132000] 00007fd99262dbf8: [<0048fcd2>] blkdev_put+0x92/0x110
[ 0.132000] 00007fd99262dc28: [<00462faf>] kill_block_super+0x3f/0x70
[ 0.132000] 00007fd99262dc48: [<00461bf2>] deactivate_locked_super+0x52/0xb0
[ 0.132000] 00007fd99262dc78: [<00462f51>] mount_bdev+0x1b1/0x1d0
[ 0.132000] 00007fd99262dc80: [<004c5c40>] ext4_fill_super+0x0/0x32f0
[ 0.132000] 00007fd99262dce8: [<004c16f0>] ext4_mount+0x10/0x20
[ 0.132000] 00007fd99262dcf8: [<00463164>] mount_fs+0x14/0xc0
[ 0.132000] 00007fd99262dd28: [<0047a38e>] vfs_kern_mount+0x5e/0x120
[ 0.132000] 00007fd99262dd78: [<0047baa8>] do_mount+0x208/0xd90
[ 0.132000] 00007fd99262dd88: [<0045d7db>] __kmalloc_track_caller+0x18b/0x1c0
[ 0.132000] 00007fd99262dd98: [<00450618>] strndup_user+0x48/0x60
[ 0.132000] 00007fd99262de08: [<00450618>] strndup_user+0x48/0x60
[ 0.132000] 00007fd99262de48: [<0047c99a>] SyS_mount+0x6a/0xb0
[ 0.132000] 00007fd99262de98: [<00403b89>] run_syscalls+0xd9/0x1c0
[ 0.132000] 00007fd99262deb0: [<00425970>] autoremove_wake_function+0x0/0x40
[ 0.132000] 00007fd99262dec8: [<005535f0>] kernel_init+0x0/0xf0
[ 0.132000] 00007fd99262dee8: [<005535f0>] kernel_init+0x0/0xf0
[ 0.132000] 00007fd99262df08: [<00402f2d>] run_init_process+0x1d/0x40
[ 0.132000] 00007fd99262df18: [<00553626>] kernel_init+0x36/0xf0
[ 0.132000] 00007fd99262df28: [<00403158>] thread_bootstrap+0x48/0x50
[ 0.132000]
[ 0.132000] ---[ end trace 0b886a9a044a4995 ]---
Thanks,
//richard