[PATCH] f2fs: change maximum zstd compression buffer size
From: Jaegeuk Kim
Date: Mon May 04 2020 - 10:30:42 EST
From: Daeho Jeong <daehojeong@xxxxxxxxxx>
Currently, the zstd compression buffer is sized one page plus the
header size smaller than the cluster size. With this undersized
buffer, zstd compression can appear to succeed even when the
compressed data fails to fit into the buffer, and reading the cluster
back eventually returns an I/O error because of the corrupted
compressed data.

Instead, size the output buffer with ZSTD_compressBound() so that the
compressed result is always guaranteed to fit.
Signed-off-by: Daeho Jeong <daehojeong@xxxxxxxxxx>
---
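Not part of the patch: a minimal userspace sketch of the same sizing
idea, using the public libzstd simple API rather than the kernel's
zstd wrappers. PAGE_SZ and LOG_CLUSTER are assumed stand-ins for the
kernel's PAGE_SIZE and cc->log_cluster_size. A destination buffer of
ZSTD_compressBound(src_size) bytes can hold any compression result,
so the undersized-output failure described above cannot occur.
Build with: cc sketch.c -lzstd

/* Illustration only: userspace sketch, not the kernel code. */
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

#define PAGE_SZ     4096        /* assumed stand-in for PAGE_SIZE */
#define LOG_CLUSTER 2           /* assumed: 1 << 2 = 4 pages per cluster */

int main(void)
{
	size_t rlen = (size_t)PAGE_SZ << LOG_CLUSTER;   /* like cc->rlen */
	size_t clen = ZSTD_compressBound(rlen);         /* like cc->clen after this patch */
	unsigned char *src = malloc(rlen);
	unsigned char *dst = malloc(clen);
	size_t i, ret;

	if (!src || !dst)
		return 1;

	/* Random data is nearly incompressible: the case where the old
	 * rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE buffer was too small. */
	for (i = 0; i < rlen; i++)
		src[i] = rand() & 0xff;

	ret = ZSTD_compress(dst, clen, src, rlen, 1);
	if (ZSTD_isError(ret)) {
		fprintf(stderr, "compress: %s\n", ZSTD_getErrorName(ret));
		return 1;
	}

	/* As in f2fs, the caller can still decide afterwards whether the
	 * result is small enough to be worth storing compressed. */
	printf("%zu -> %zu bytes: %s\n", rlen, ret,
	       ret < rlen ? "store compressed" : "store raw");

	free(src);
	free(dst);
	return 0;
}

With incompressible input the compressed size comes out slightly
larger than rlen, which is exactly the case the old buffer sizing
could not hold.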
fs/f2fs/compress.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 4c7eaeee52336..a9fa8049b295f 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
 	cc->private = workspace;
 	cc->private2 = stream;
 
-	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
+	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
 	return 0;
 }
@@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
 	ZSTD_inBuffer inbuf;
 	ZSTD_outBuffer outbuf;
 	int src_size = cc->rlen;
-	int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
+	int dst_size = cc->clen;
 	int ret;
 
 	inbuf.pos = 0;
--
2.26.2.526.g744177e7f7-goog