Re: [PATCH] f2fs: introduce nid cache
From: Chao Yu
Date: Wed Feb 08 2017 - 10:25:54 EST
On 2017/2/7 15:24, Chao Yu wrote:
> Hi Jaegeuk,
>
> Happy Chinese New Year! :)
>
> On 2017/1/24 12:35, Jaegeuk Kim wrote:
>> Hi Chao,
>>
>> On 01/22, Chao Yu wrote:
>>> In scenarios of intensive node allocation, free nids run out quickly, and
>>> then we have to stop and load more free nids by traversing NAT blocks. In
>>> the worst case, if the NAT blocks are not cached in memory, this generates
>>> IOs which slow down our foreground operations.
>>>
>>> In order to speed up node allocation, this patch introduces a new option
>>> named "nid cache". When this option is turned on, we load all nat entries
>>> from the NAT blocks at mount time and organize all free nids in a bitmap;
>>> any operation related to free nids then queries and updates this prebuilt
>>> bitmap instead of reading and looking up NAT blocks, so the performance of
>>> node allocation can be improved.
>>>
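To make the bitmap idea above concrete, a minimal sketch of how the query/update
path could look is below. The free_nid_bitmap field (one bitmap per NAT block,
NAT_ENTRY_PER_BLOCK bits each) is my shorthand and may differ from the patch;
NAT_BLOCK_OFFSET() and START_NID() are the existing node.h macros.

/*
 * Sketch only, not the patch itself: one bit per nid, grouped per NAT
 * block, built once at mount and consulted instead of reading NAT blocks.
 */
static bool nid_is_free(struct f2fs_nm_info *nm_i, nid_t nid)
{
	unsigned int nat_ofs = NAT_BLOCK_OFFSET(nid);	/* which NAT block */
	unsigned int nid_ofs = nid - START_NID(nid);	/* offset inside it */

	return test_bit_le(nid_ofs, nm_i->free_nid_bitmap[nat_ofs]);
}

static void update_free_nid_bitmap(struct f2fs_nm_info *nm_i, nid_t nid,
							bool set)
{
	unsigned int nat_ofs = NAT_BLOCK_OFFSET(nid);
	unsigned int nid_ofs = nid - START_NID(nid);

	if (set)
		__set_bit_le(nid_ofs, nm_i->free_nid_bitmap[nat_ofs]);
	else
		__clear_bit_le(nid_ofs, nm_i->free_nid_bitmap[nat_ofs]);
}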
>>
>> How does this affect mount time and memory consumption?
>
> Sorry for the delay.
>
> Let me figure out some numbers later.
a. mount time
I chose a slow device (Kingston 16GB SD card) to see how this option affects
mount time when there is not enough bandwidth at the low level.
Before the test, I changed the readahead window size of NAT pages from
FREE_NID_PAGES * 8 to sbi->blocks_per_seg for better readahead performance
(a sketch of that tweak follows the numbers below). The result is:
time mount -t f2fs -o nid_cache /dev/sde /mnt/f2fs/
before:
real 0m0.204s
user 0m0.004s
sys 0m0.020s
after:
real 0m3.792s
user 0m0.000s
sys 0m0.140s
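For reference, the readahead tweak mentioned above is roughly the following.
This is a sketch only: ra_meta_pages() is the existing helper in checkpoint.c,
but the exact call site in the NAT loading path is my assumption.

-	ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), FREE_NID_PAGES * 8,
-							META_NAT, true);
+	ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), sbi->blocks_per_seg,
+							META_NAT, true);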
b. memory consumption
For a 16GB image there are 34 NAT pages in total, so the memory footprint is:
34 / 2 * 512 * 455 / 8 = 495040 bytes = 483.4 KB
The increase in memory footprint is linear with the total user valid blocks in
the image, and at most it will eat:
3900 * 8 * 455 / 8 = 1774500 bytes = 1732.9 KB
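For completeness, here is how I derive the first number. My reading of the
factors (which is an assumption on my side): the 34 is halved because the NAT
area keeps two copies, each remaining unit holds 512 NAT blocks, each NAT block
covers 455 nids, and the bitmap costs one bit per nid.

/* Userspace sketch of the footprint estimate above, not kernel code. */
#include <stdio.h>

int main(void)
{
	unsigned long nat_units = 34;			/* from the 16GB image */
	unsigned long nat_blocks = nat_units / 2 * 512;	/* drop the 2nd NAT copy */
	unsigned long nids = nat_blocks * 455;		/* 455 NAT entries per block */
	unsigned long bytes = nids / 8;			/* one bit per nid */

	printf("%lu bytes = %.1f KB\n", bytes, bytes / 1024.0);
	return 0;
}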
Thanks,
>
>> IMO, if those do not
>> raise huge concerns, we would be able to consider just replacing the current
>> free nid list with this bitmap.
>
> Yup, I agree with you.
>
> Thanks,
>