Re: [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer size

From: Jaegeuk Kim
Date: Tue May 05 2020 - 19:06:02 EST


On 05/05, Chao Yu wrote:
> On 2020-5-4 22:30, Jaegeuk Kim wrote:
> > From: Daeho Jeong <daehojeong@xxxxxxxxxx>
> >
> > Currently, the zstd compression buffer size is the cluster size minus
> > one page and the header size. Because of this, zstd compression always
> > reports success even when the compressed data fails to fit into the
> > buffer, and reading the cluster back later returns an I/O error due to
> > the corrupted compressed data.
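
(For reference, the two sizing formulas compare like this for the default
4-page cluster. This is a standalone userspace sketch, not f2fs code:
PAGE_SIZE, COMPRESS_HEADER_SIZE and compress_bound() are stand-ins, with
the bound mirroring zstd's documented ZSTD_COMPRESSBOUND() worst-case
formula.)

	/* standalone sketch: old vs. new zstd destination buffer sizing */
	#include <stdio.h>

	#define PAGE_SIZE		4096UL
	#define LOG_CLUSTER_SIZE	2	/* default f2fs cluster: 4 pages */
	#define COMPRESS_HEADER_SIZE	32UL	/* stand-in value for illustration */

	/* mirrors zstd's documented ZSTD_COMPRESSBOUND() worst-case formula */
	static unsigned long compress_bound(unsigned long src)
	{
		return src + (src >> 8) +
		       (src < (128UL << 10) ? ((128UL << 10) - src) >> 11 : 0);
	}

	int main(void)
	{
		unsigned long rlen = PAGE_SIZE << LOG_CLUSTER_SIZE;

		/* old: output had to shrink by at least one page plus the header */
		printf("old clen = %lu\n", rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE);
		/* new: room for zstd's worst case, so it can always flush */
		printf("new clen = %lu\n", compress_bound(rlen));
		return 0;
	}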
>
> What's the root cause of this issue? I didn't get it.
>
> >
> > Signed-off-by: Daeho Jeong <daehojeong@xxxxxxxxxx>
> > ---
> > fs/f2fs/compress.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> > index 4c7eaeee52336..a9fa8049b295f 100644
> > --- a/fs/f2fs/compress.c
> > +++ b/fs/f2fs/compress.c
> > @@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
> > cc->private = workspace;
> > cc->private2 = stream;
> >
> > - cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
> > + cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
>
> On my machine, the value is 66572, which is much larger than the size
> of the dst buffer. So, where do we tell the zstd compressor the real
> size of the dst buffer? Otherwise, if the compressed data size is
> larger than the dst buffer size, we may overflow the dst buffer when
> flushing the compressed data into it.
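
(Note on the overflow question: with the streaming API, zstd never writes
past outbuf.size, so that field is where the real dst buffer size gets
communicated; data that does not fit stays buffered inside zstd and shows
up as a nonzero return from ZSTD_endStream() instead of being written out
of bounds. A minimal sketch of that pattern, assuming the kernel's
ZSTD_CStream interface; compress_into() and real_dst_size are
illustrative names, not f2fs code:)

	#include <linux/errno.h>
	#include <linux/zstd.h>

	/* sketch: bound zstd output by the real size of the dst buffer */
	static int compress_into(ZSTD_CStream *stream, ZSTD_inBuffer *inbuf,
				 void *dst, size_t real_dst_size, size_t *clen)
	{
		ZSTD_outBuffer outbuf;
		size_t ret;

		outbuf.dst = dst;
		outbuf.pos = 0;
		outbuf.size = real_dst_size;	/* hard limit for all writes */

		ret = ZSTD_compressStream(stream, &outbuf, inbuf);
		if (ZSTD_isError(ret))
			return -EIO;

		ret = ZSTD_endStream(stream, &outbuf);
		if (ZSTD_isError(ret))
			return -EIO;

		/*
		 * a nonzero return means compressed data is still buffered
		 * inside zstd because outbuf was too small, so give up with
		 * -EAGAIN instead of overflowing the dst buffer
		 */
		if (ret)
			return -EAGAIN;

		*clen = outbuf.pos;
		return 0;
	}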

Could you give compress_log_size=2 a try and check that decompression works?
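
(For reference, the cluster size is chosen at mount time; the device and
mountpoint below are placeholders:)

	mount -t f2fs -o compress_algorithm=zstd,compress_log_size=2 /dev/sdb1 /mnt/f2fs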

>
> > return 0;
> > }
> >
> > @@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
> > ZSTD_inBuffer inbuf;
> > ZSTD_outBuffer outbuf;
> > int src_size = cc->rlen;
> > - int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
> > + int dst_size = cc->clen;
> > int ret;
> >
> > inbuf.pos = 0;
> >
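
(Follow-up note: even with clen raised to the compress bound, the caller
still has to enforce the on-disk budget once compression finishes;
f2fs_compress_pages() already performs a check of this shape, sketched
here with out_cleanup as a placeholder label:)

	/* sketch: reject clusters whose compressed size no longer saves space */
	max_len = PAGE_SIZE * (cc->cluster_size - 1) - COMPRESS_HEADER_SIZE;
	if (cc->clen > max_len) {
		ret = -EAGAIN;	/* fall back to writing the cluster uncompressed */
		goto out_cleanup;
	}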