Re: [PATCH] ubifs: Add support for zstd compression.
From: Emil Lenngren
Date: Fri Jun 07 2019 - 19:45:14 EST
Hi,
On Fri, 7 Jun 2019 at 22:49, Richard Weinberger <richard@xxxxxx> wrote:
>
> ----- Original Message -----
> > From: "Emil Lenngren" <emil.lenngren@xxxxxxxxx>
> > To: "richard" <richard@xxxxxx>
> > CC: "linux-mtd" <linux-mtd@xxxxxxxxxxxxxxxxxxx>, "Sebastian Andrzej Siewior" <sebastian@xxxxxxxxxxxxx>, "linux-kernel"
> > <linux-kernel@xxxxxxxxxxxxxxx>, "Michele Dionisio" <michele.dionisio@xxxxxxxxx>
> > Sent: Friday, 7 June 2019, 22:27:09
> > Subject: Re: [PATCH] ubifs: Add support for zstd compression.
> >> So I'm not sure what the best choice for the default filesystem is.
> >
> > My idea was to put it at the end, i.e. it will only be used as the
> > default when LZO and ZLIB are not selected for UBIFS, for example
> > when someone who knows exactly which compression algorithms will be
> > used on that system compiles a minimal kernel.
>
> BTW: you can always select the compressor using the compr= mount option.
> Also for the default filesystem.
Yep, that's what I'm using while I'm testing.
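For reference, overriding the compressor at mount time looks roughly
like the following (the volume name ubi0:rootfs and mount point /mnt
are placeholders, not taken from this thread):

```shell
# Attach the UBI volume and mount it with zstd forced via compr=
# (overrides the default compressor recorded in the filesystem):
mount -t ubifs -o compr=zstd ubi0:rootfs /mnt
```

The same option accepts "lzo", "zlib", or "none" depending on which
compressors are compiled in.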
> Putting it at the end does no harm, but IMHO its usefulness is limited.
> Still, for the sake of completeness, I agree with you. Can you send a
> follow-up patch?
Ok
>
> > I did a single test today comparing lzo and zstd; in that test lzo
> > had faster decompression but resulted in a larger image. I'll do
> > more tests later.
>
> Can you please share more details? I'm interested in what CPU this was.
ARM Cortex-A7. The kernel is compiled with gcc 7.3.1. Next week I'll
test some more.
I have a question about how decompression is done while reading.
When a large file is read from the filesystem (assuming it is not in
any cache), is it the case that first a chunk is read from flash and
decompressed, then the next chunk is read and decompressed, and so on?
Or can decompression run in parallel with reading the next chunk from
flash?
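To make the two patterns I mean concrete, here is a user-space sketch
(not UBIFS internals): "flash" is faked as a list of independently
compressed chunks, the serial variant decompresses each chunk before
reading the next, and the pipelined variant keeps the next read in
flight while decompressing the current chunk. zlib stands in for the
filesystem's compressor; all names here are hypothetical.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Fabricated "flash" contents: independently compressed chunks,
# standing in for compressed data nodes on flash.
CHUNKS = [zlib.compress(bytes([i]) * 4096) for i in range(8)]

def read_chunk(i):
    # Stand-in for a flash read of chunk i.
    return CHUNKS[i]

def serial_read(n):
    # Pattern 1: read a chunk, decompress it, only then read the next.
    out = b""
    for i in range(n):
        out += zlib.decompress(read_chunk(i))
    return out

def pipelined_read(n):
    # Pattern 2: start reading chunk i+1 while decompressing chunk i,
    # so flash I/O and CPU decompression overlap.
    out = b""
    with ThreadPoolExecutor(max_workers=1) as io:
        fut = io.submit(read_chunk, 0)
        for i in range(n):
            data = fut.result()
            if i + 1 < n:
                fut = io.submit(read_chunk, i + 1)  # next read in flight
            out += zlib.decompress(data)            # decompress current
    return out

assert serial_read(8) == pipelined_read(8)
```

Both variants return the same bytes; the pipelined one only changes
when the I/O happens relative to the decompression work.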
/Emil