[PATCH] f2fs: fix retry logic in f2fs_write_cache_pages()
From: Sahitya Tummala
Date: Tue May 26 2020 - 22:26:32 EST
When a compressed file is being overwritten, the current retry logic
does not cover the page that needs to be retried, because it resets the
new start index to 0 and the new end index to writeback_index - 1. As a
result, the corresponding cluster is decompressed and written out as
normal pages without compression. Fix this by letting writeback retry
the current page as well (when a compressed page is retried due to an
index mismatch with its cluster index), so that the cluster can be
written compressed in the overwrite case.
Signed-off-by: Sahitya Tummala <stummala@xxxxxxxxxxxxxx>
---
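Not part of the patch, just a small userspace sketch of the index
arithmetic for illustration (pgoff_t end is inclusive; (pgoff_t)-1 is
the largest possible index, i.e. "scan to the end of the file"). The
writeback_index and page values below are made up:

#include <stdio.h>
#include <stdbool.h>

typedef unsigned long pgoff_t;

/* Scan window selected at the "goto retry" point, before the patch. */
static void retry_window_before(pgoff_t writeback_index,
				pgoff_t *index, pgoff_t *end)
{
	*index = 0;
	*end = writeback_index - 1;
}

/* Same window with the patch applied; retry != 0 in the compression case. */
static void retry_window_after(pgoff_t writeback_index, int retry,
			       pgoff_t *index, pgoff_t *end)
{
	*index = 0;
	*end = retry ? (pgoff_t)-1 : writeback_index - 1; /* -1 == ULONG_MAX */
}

static bool page_in_window(pgoff_t page, pgoff_t index, pgoff_t end)
{
	return page >= index && page <= end;
}

int main(void)
{
	/* Cyclic writeback started at page 100; the compressed cluster
	 * page that forced the retry sits at index 112, i.e. at or after
	 * writeback_index. */
	pgoff_t writeback_index = 100;
	pgoff_t cluster_page = 112;
	pgoff_t index, end;

	retry_window_before(writeback_index, &index, &end);
	printf("before: window [%lu, %lu], page %lu rescanned: %s\n",
	       index, end, cluster_page,
	       page_in_window(cluster_page, index, end) ? "yes" : "no");

	retry_window_after(writeback_index, 1, &index, &end);
	printf("after:  window [%lu, %lu], page %lu rescanned: %s\n",
	       index, end, cluster_page,
	       page_in_window(cluster_page, index, end) ? "yes" : "no");

	return 0;
}

With these example values the old window [0, 99] misses page 112, while
the patched window [0, ULONG_MAX] covers it, so the cluster is rescanned
and can be written compressed.
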
fs/f2fs/data.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 4af5fcd..bfd1df4 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3024,7 +3024,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 	if ((!cycled && !done) || retry) {
 		cycled = 1;
 		index = 0;
-		end = writeback_index - 1;
+		end = retry ? -1 : writeback_index - 1;
 		goto retry;
 	}
 	if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
--