Re: [PATCH 4/6] staging: erofs: cleanup `z_erofs_vle_normalaccess_readpages'

From: Gao Xiang
Date: Wed Sep 19 2018 - 11:32:25 EST


Hi Chao,

On 2018/9/19 23:26, Chao Yu wrote:
> Hi Xiang,
>
> On 2018/9/19 13:49, Gao Xiang wrote:
>> This patch introduces `__should_decompress_synchronously' to
>> clean up `z_erofs_vle_normalaccess_readpages'.
>>
>> Signed-off-by: Gao Xiang <gaoxiang25@xxxxxxxxxx>
>> ---
>>  drivers/staging/erofs/internal.h  | 11 +++++++++++
>>  drivers/staging/erofs/super.c     |  5 +++++
>>  drivers/staging/erofs/unzip_vle.c | 20 ++++++--------------
>>  3 files changed, 22 insertions(+), 14 deletions(-)
>>
>> diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
>> index cfcc6db..c84eb97 100644
>> --- a/drivers/staging/erofs/internal.h
>> +++ b/drivers/staging/erofs/internal.h
>> @@ -95,6 +95,9 @@ struct erofs_sb_info {
>>  	/* the dedicated workstation for compression */
>>  	struct radix_tree_root workstn_tree;
>>
>> +	/* threshold for decompression synchronously */
>> +	unsigned int max_sync_decompress_pages;
>> +
>>  #ifdef EROFS_FS_HAS_MANAGED_CACHE
>>  	struct inode *managed_cache;
>>  #endif
>> @@ -273,6 +276,14 @@ extern int erofs_try_to_free_cached_page(struct address_space *mapping,
>>  					 struct page *page);
>>  #endif
>>
>> +#define DEFAULT_MAX_SYNC_DECOMPRESS_PAGES	4
>> +
>> +static inline bool __should_decompress_synchronously(struct erofs_sb_info *sbi,
>> +						     unsigned int nr)
>> +{
>> +	return nr <= sbi->max_sync_decompress_pages;
> - nr_pages < 4 /* sync */);
>
> There is a slight behavior change beyond the pure cleanup, IIUC; would
> there be any performance difference around the boundary of four pages?

No, I don't expect any noticeable performance difference around that boundary.
Previously, synchronous decompression was applied to 1, 2 or 3 pages for no
special reason, but I think it is better to make the threshold a power of two,
so 1, 2, 3 or 4 pages is preferred. And I have no good way to measure which
choice is better, or which value is best for all platforms and use cases...

Therefore I tuned it in this patch, since I don't like the value
DEFAULT_MAX_SYNC_DECOMPRESS_PAGES == 3 ...
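
For reference, here is a tiny stand-alone illustration of the boundary change
(plain userspace C, not kernel code; the stub struct and the old_check/new_check
names are made up for illustration, only the two comparison expressions and
DEFAULT_MAX_SYNC_DECOMPRESS_PAGES come from this patch):

#include <stdbool.h>
#include <stdio.h>

#define DEFAULT_MAX_SYNC_DECOMPRESS_PAGES	4

/* stand-in for the threshold field added to struct erofs_sb_info */
struct sbi_stub {
	unsigned int max_sync_decompress_pages;
};

/* old, open-coded check removed by this patch: sync for 1..3 pages */
static bool old_check(unsigned int nr_pages)
{
	return nr_pages < 4;
}

/* new helper check: sync for 1..max_sync_decompress_pages (default 4) */
static bool new_check(const struct sbi_stub *sbi, unsigned int nr)
{
	return nr <= sbi->max_sync_decompress_pages;
}

int main(void)
{
	const struct sbi_stub sbi = {
		.max_sync_decompress_pages = DEFAULT_MAX_SYNC_DECOMPRESS_PAGES,
	};
	unsigned int nr;

	for (nr = 1; nr <= 6; nr++)
		printf("nr_pages=%u: old=%s new=%s\n", nr,
		       old_check(nr) ? "sync" : "async",
		       new_check(&sbi, nr) ? "sync" : "async");
	return 0;
}

Only nr_pages == 4 changes behavior (it was async before and is sync now);
1..3 and >= 5 pages behave exactly as before.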

Thanks,
Gao Xiang

>
> Thanks,
>