Re: [PATCH v2] f2fs: fix sbi->extent_list corruption issue

From: Jaegeuk Kim
Date: Fri Jan 04 2019 - 15:33:39 EST


On 01/04, Sahitya Tummala wrote:
> On Mon, Nov 26, 2018 at 10:17:20AM +0530, Sahitya Tummala wrote:
> > When there is a failure in f2fs_fill_super() after/during
> > the recovery of fsync'd nodes, it frees the current sbi and
> > retries. This time the mount is successful, but the files
> > that got recovered before the retry still hold their extent
> > trees, whose extent node list is corrupted since sbi and
> > sbi->extent_list were freed up. The list_del corruption issue
> > is observed when the file system is being unmounted and those
> > recovered files' extent nodes are being freed up, in the
> > context below.
> >
> > list_del corruption. prev->next should be fffffff1e1ef5480, but was (null)
> > <...>
> > kernel BUG at kernel/msm-4.14/lib/list_debug.c:53!
> > task: fffffff1f46f2280 task.stack: ffffff8008068000
> > lr : __list_del_entry_valid+0x94/0xb4
> > pc : __list_del_entry_valid+0x94/0xb4
> > <...>
> > Call trace:
> > __list_del_entry_valid+0x94/0xb4
> > __release_extent_node+0xb0/0x114
> > __free_extent_tree+0x58/0x7c
> > f2fs_shrink_extent_tree+0xdc/0x3b0
> > f2fs_leave_shrinker+0x28/0x7c
> > f2fs_put_super+0xfc/0x1e0
> > generic_shutdown_super+0x70/0xf4
> > kill_block_super+0x2c/0x5c
> > kill_f2fs_super+0x44/0x50
> > deactivate_locked_super+0x60/0x8c
> > deactivate_super+0x68/0x74
> > cleanup_mnt+0x40/0x78
> > __cleanup_mnt+0x1c/0x28
> > task_work_run+0x48/0xd0
> > do_notify_resume+0x678/0xe98
> > work_pending+0x8/0x14
> >
> > Fix this by cleaning up the inodes, extent trees and extent
> > nodes of those recovered files before freeing up sbi and before
> > the next retry.
> >
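For context, a minimal sketch of the linkage that goes stale, paraphrased
from fs/f2fs/f2fs.h rather than quoted verbatim: every cached extent node
is threaded onto a per-superblock LRU list, so once the failed attempt's
sbi is freed, the surviving extent nodes of the recovered files still carry
list entries pointing into freed memory, and the list_del() later done by
__release_extent_node() at unmount is what trips CONFIG_DEBUG_LIST.

struct extent_node {
	struct rb_node rb_node;		/* node in the per-inode extent rb-tree */
	struct extent_info ei;		/* cached extent info */
	struct list_head list;		/* LRU entry on sbi->extent_list */
	struct extent_tree *et;		/* back-pointer to the owning extent tree */
};
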
> Hi Jaegeuk, Chao,
>
> I have observed another scenario where a similar list corruption issue
> can happen with sbi->inode_list as well. If recover_fsync_data()
> fails at some point in write_checkpoint() due to some error and
> those recovered inodes are still dirty, then after the mount
> eventually succeeds, this issue is observed when one of those
> dirty inodes is under writeback.

recover_fsync_data() does iget/iput in pairs, and destroy_fsync_dnodes() drops
its dirty list and calls iput() when there is an error. So, after that, there
should be no dirty inodes. If there's no error, checkpoint() flushes quota/dentry
pages in dirty inodes as well. Can we check where this dirty inode came from?
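For reference, the teardown pattern being described, as a rough sketch (the
upstream helper in fs/f2fs/recovery.c splits this across del_fsync_inode(),
so take the exact shape here as an approximation rather than the merged code):

/* drop every inode reference taken while building the fsync inode list;
 * after this loop no recovery-held reference keeps an inode alive */
static void destroy_fsync_dnodes(struct list_head *head)
{
	struct fsync_inode_entry *entry, *tmp;

	list_for_each_entry_safe(entry, tmp, head, list) {
		iput(entry->inode);	/* pairs with the iget done during recovery */
		list_del(&entry->list);
		kmem_cache_free(fsync_entry_slab, entry);
	}
}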

Oh, one scenario can be an error from f2fs_disable_checkpoint(), which will do GC.

>
> [ 90.400500] list_del corruption. prev->next should be ffffffed1f566208, but was (null)
> [ 90.675349] Call trace:
> [ 90.677869] __list_del_entry_valid+0x94/0xb4
> [ 90.682351] remove_dirty_inode+0xac/0x114
> [ 90.686563] __f2fs_write_data_pages+0x6a8/0x6c8
> [ 90.691302] f2fs_write_data_pages+0x40/0x4c
> [ 90.695695] do_writepages+0x80/0xf0
> [ 90.699372] __writeback_single_inode+0xdc/0x4ac
> [ 90.704113] writeback_sb_inodes+0x280/0x440
> [ 90.708501] wb_writeback+0x1b8/0x3d0
> [ 90.712267] wb_workfn+0x1a8/0x4d4
> [ 90.715765] process_one_work+0x1c0/0x3d4
> [ 90.719883] worker_thread+0x224/0x344
> [ 90.723739] kthread+0x120/0x130
> [ 90.727055] ret_from_fork+0x10/0x18
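
The analogous linkage for this second report, again paraphrased rather than
quoted from fs/f2fs/checkpoint.c: a dirty f2fs inode is threaded via
F2FS_I(inode)->dirty_list onto the per-sbi dirty inode list, so if the inode
survives the failed fill_super attempt, its list entry still points into the
freed sbi:

	/* when the inode becomes dirty (sketch, not the exact upstream code) */
	list_add_tail(&F2FS_I(inode)->dirty_list, &sbi->inode_list[type]);

	/* later, remove_dirty_inode() during writeback unlinks it again;
	 * if that list head lived in the first, already-freed sbi, the
	 * list_del sees prev->next == (null), as in the trace above */
	list_del_init(&F2FS_I(inode)->dirty_list);
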
>
> I think it is better to clean up those inodes completely before freeing sbi
> and before the next retry, as done in this patch. Would you like to reconsider
> this patch for this new issue?

The patch has already been merged in mainline.
Could you take a look at this patch?