Re: [PATCH] fuse: increase FUSE_MAX_MAX_PAGES limit
From: Jingbo Xu
Date: Wed Jan 24 2024 - 07:48:04 EST
On 1/24/24 8:23 PM, Miklos Szeredi wrote:
> On Wed, 24 Jan 2024 at 08:05, Jingbo Xu <jefflexu@xxxxxxxxxxxxxxxxx> wrote:
>>
>> From: Xu Ji <laoji.jx@xxxxxxxxxxxxxxx>
>>
>> Increase the FUSE_MAX_MAX_PAGES limit, so that the maximum data size of
>> a single request can be raised.
>
> The only worry is about where this memory is getting accounted to.
> This needs to be thought through, since we are increasing the
> possible memory that an unprivileged user is allowed to pin.
OK, that will be an issue.
>
>>
>> This improves write performance, especially when the optimal IO size
>> of the backend store at the fuse daemon side is greater than the original
>> maximum request size (i.e. 1MB with 256 FUSE_MAX_MAX_PAGES and
>> 4096 PAGE_SIZE).
>>
>> Note that this only increases the upper limit of the maximum request
>> size, while the real maximum request size relies on the FUSE_INIT
>> negotiation with the fuse daemon.
>>
>> Signed-off-by: Xu Ji <laoji.jx@xxxxxxxxxxxxxxx>
>> Signed-off-by: Jingbo Xu <jefflexu@xxxxxxxxxxxxxxxxx>
>> ---
>> I'm not sure if 1024 is adequate for FUSE_MAX_MAX_PAGES, as the
>> Bytedance folks seem to have increased the maximum request size to 8MB
>> and seen a ~20% performance boost.
>
> The 20% is against the 256 pages, I guess.
Yeah, I guess so.
> It would be interesting to
> see the how the number of pages per request affects performance and
> why.
To be honest, I'm not sure of the root cause of the performance boost in
Bytedance's case.
In our internal use scenario, the optimal IO size of the backend store at
the fuse server side is, e.g., 4MB, so the maximum throughput cannot be
achieved with the current 256 pages per request. In other words, the
backend store, e.g. a distributed parallel filesystem, gets optimal
performance when the data is aligned at a 4MB boundary. I can ask my
colleague who implements the fuse server to provide more background info
and the exact performance statistics.
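In case a concrete example helps: below is a minimal, untested sketch of
the daemon side, assuming a libfuse-3 based server (the example_* names
are made up for illustration). It only asks for 4MB writes during the
FUSE_INIT handshake; the kernel still clamps the request size to its
negotiated max_pages, which is exactly the limit this patch raises.

#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <stddef.h>

/* init callback: advertise the write size the daemon would like; the
 * kernel may lower it to its own negotiated limit. */
static void *example_init(struct fuse_conn_info *conn,
                          struct fuse_config *cfg)
{
        (void)cfg;
        /* match the backend store's 4MB optimal IO size */
        conn->max_write = 4 * 1024 * 1024;
        return NULL;
}

static const struct fuse_operations example_ops = {
        .init = example_init,
        /* read/write/getattr etc. elided */
};

int main(int argc, char *argv[])
{
        return fuse_main(argc, argv, &example_ops, NULL);
}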
Thanks.
--
Thanks,
Jingbo