[PATCH 4.9 152/175] btrfs: scrub: Don't use inode page cache in scrub_handle_errored_block()
From: Greg Kroah-Hartman
Date: Mon Dec 28 2020 - 08:06:55 EST
From: Qu Wenruo <wqu@xxxxxxxx>
commit 665d4953cde6d9e75c62a07ec8f4f8fd7d396ade upstream

In commit ac0b4145d662 ("btrfs: scrub: Don't use inode pages for device
replace") we removed the branch that led to copy_nocow_pages(), to
avoid corrupting compressed nodatasum extents.

However, the above commit only solves the problem in scrub_extent(). If
we fail to read some pages during scrub_pages(), the block's
no_io_error_seen flag is cleared and we go to the fixup function
scrub_handle_errored_block().

In scrub_handle_errored_block(), for data without csum (no matter
whether we're doing replace or scrub) we go to the
scrub_fixup_nodatasum() routine, which does much the same thing as
copy_nocow_pages(), but without the extra check that the
copy_nocow_pages() routine performs.

So for test cases like btrfs/100, where we emulate read errors during
replace/scrub, we could corrupt compressed extent data again.
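
To make the corruption mode concrete, here is a small userspace C
sketch. It is not btrfs code: the run-length codec, buffer sizes and
names are invented for illustration. The inode page cache holds the
decompressed file bytes while the on-disk extent holds the compressed
stream, so a fixup that writes the cached bytes at the extent's
physical location leaves data the extent metadata can no longer decode,
whereas copying the raw physical bytes from a good mirror keeps it
decodable.

/*
 * Toy userspace model, NOT btrfs code: illustrates why rewriting a
 * compressed extent from the (decompressed) page cache corrupts it,
 * while copying the raw bytes from a good mirror does not.
 */
#include <stdio.h>
#include <string.h>

#define DISK_SZ    64
#define EXTENT_OFF 16	/* physical offset of the compressed extent */

/* toy "compression": <count><byte> pairs, a zero count terminates */
static size_t rle_encode(const unsigned char *in, size_t len, unsigned char *out)
{
	size_t i = 0, o = 0;

	while (i < len) {
		unsigned char c = in[i], n = 0;

		while (i < len && in[i] == c && n < 255) {
			i++;
			n++;
		}
		out[o++] = n;
		out[o++] = c;
	}
	out[o++] = 0;
	return o;
}

static int rle_decode(const unsigned char *in, unsigned char *out, size_t outsz)
{
	size_t o = 0;

	while (*in) {
		unsigned char n = *in++;
		unsigned char c = *in++;

		if (o + n > outsz)
			return -1;	/* stream is no longer decodable */
		memset(out + o, c, n);
		o += n;
	}
	return 0;
}

int main(void)
{
	unsigned char cache[32], disk[DISK_SZ], mirror[DISK_SZ], tmp[32];

	memset(cache, 'A', sizeof(cache));	/* decompressed file data */
	memset(disk, 0, sizeof(disk));
	rle_encode(cache, sizeof(cache), disk + EXTENT_OFF);
	memcpy(mirror, disk, sizeof(disk));	/* intact second copy */

	/* fixup via page cache: decompressed bytes over the compressed extent */
	memcpy(disk + EXTENT_OFF, cache, sizeof(cache));
	printf("page cache fixup: %s\n",
	       rle_decode(disk + EXTENT_OFF, tmp, sizeof(tmp)) ?
	       "extent no longer decodable (corruption)" : "ok");

	/* fixup from the good mirror: raw physical bytes copied back as-is */
	memcpy(disk + EXTENT_OFF, mirror + EXTENT_OFF, DISK_SZ - EXTENT_OFF);
	printf("good mirror fixup: %s\n",
	       rle_decode(disk + EXTENT_OFF, tmp, sizeof(tmp)) ?
	       "corruption" : "ok");
	return 0;
}
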
This patch fixes it simply by avoiding any "optimization" for the
nodatasum case and falling back to the normal fixup routine, which
tries to read from any good copy.

This also solves a WARN_ON() or deadlock caused by the lame backref
iteration in the scrub_fixup_nodatasum() routine.

The deadlock or WARN_ON() was not triggered before commit ac0b4145d662
("btrfs: scrub: Don't use inode pages for device replace") since
copy_nocow_pages() has better locking and an extra check for the data
extent, and it already does the fixup work by trying to read the data
from any good copy, so it never reaches scrub_fixup_nodatasum() anyway.
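
As a rough sketch of that fallback (again not btrfs code; the mirror
array and helpers are invented): with no csum to verify against, a
"good copy" can only mean another mirror whose read completes without
I/O error, and its raw bytes are used to rewrite the failed copy.

/*
 * Toy userspace sketch, NOT btrfs code: try every other mirror and
 * rewrite the bad copy from the first one that reads back cleanly.
 */
#include <stdio.h>
#include <string.h>

#define NR_MIRRORS 3
#define BLK 16

static unsigned char mirrors[NR_MIRRORS][BLK];
static int io_error[NR_MIRRORS];	/* 1: reading this copy fails */

/* toy read: fails when the mirror is marked bad */
static int read_copy(int m, unsigned char *buf)
{
	if (io_error[m])
		return -1;
	memcpy(buf, mirrors[m], BLK);
	return 0;
}

/* rewrite the bad copy from the first readable mirror */
static int repair_block(int bad)
{
	unsigned char buf[BLK];

	for (int m = 0; m < NR_MIRRORS; m++) {
		if (m == bad || read_copy(m, buf))
			continue;
		memcpy(mirrors[bad], buf, BLK);
		io_error[bad] = 0;
		return 0;
	}
	return -1;			/* no readable copy left */
}

int main(void)
{
	memset(mirrors[0], 'G', BLK);
	memset(mirrors[2], 'G', BLK);
	io_error[1] = 1;		/* mirror 1 failed to read */

	printf("repair %s\n", repair_block(1) ? "failed" : "succeeded");
	printf("mirror 1 %s\n",
	       memcmp(mirrors[1], mirrors[0], BLK) ? "differs" : "matches");
	return 0;
}
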
This patch only disables the faulty code; it will be removed completely
in a followup patch.

Fixes: ac0b4145d662 ("btrfs: scrub: Don't use inode pages for device replace")
Signed-off-by: Qu Wenruo <wqu@xxxxxxxx>
Signed-off-by: David Sterba <dsterba@xxxxxxxx>
[sudip: adjust context]
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@xxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
fs/btrfs/scrub.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -919,11 +919,6 @@ static int scrub_handle_errored_block(st
 	have_csum = sblock_to_check->pagev[0]->have_csum;
 	dev = sblock_to_check->pagev[0]->dev;
 
-	if (sctx->is_dev_replace && !is_metadata && !have_csum) {
-		sblocks_for_recheck = NULL;
-		goto nodatasum_case;
-	}
-
 	/*
 	 * read all mirrors one after the other. This includes to
 	 * re-read the extent or metadata block that failed (that was
@@ -1036,13 +1031,19 @@ static int scrub_handle_errored_block(st
 		goto out;
 	}
 
-	if (!is_metadata && !have_csum) {
+	/*
+	 * NOTE: Even for nodatasum case, it's still possible that it's a
+	 * compressed data extent, thus scrub_fixup_nodatasum(), which write
+	 * inode page cache onto disk, could cause serious data corruption.
+	 *
+	 * So here we could only read from disk, and hope our recovery could
+	 * reach disk before the newer write.
+	 */
+	if (0 && !is_metadata && !have_csum) {
 		struct scrub_fixup_nodatasum *fixup_nodatasum;
 
 		WARN_ON(sctx->is_dev_replace);
 
-nodatasum_case:
-
 		/*
 		 * !is_metadata and !have_csum, this means that the data
 		 * might not be COWed, that it might be modified