Re: [PATCH 2/3] f2fs: schedule in between two continuous batch discards

From: Chao Yu
Date: Thu Aug 25 2016 - 05:22:54 EST


Hi Jaegeuk,

On 2016/8/24 0:53, Jaegeuk Kim wrote:
> Hi Chao,
>
> On Sun, Aug 21, 2016 at 11:21:30PM +0800, Chao Yu wrote:
>> From: Chao Yu <yuchao0@xxxxxxxxxx>
>>
>> The batch discard approach of fstrim grabs/releases the gc_mutex lock
>> repeatedly, which makes contention on the lock more intensive.
>>
>> So after one batch of discards is issued in checkpoint and the lock
>> is released, it's better to do schedule() to give other competitors
>> more opportunity to grab the gc_mutex lock.
>>
>> Signed-off-by: Chao Yu <yuchao0@xxxxxxxxxx>
>> ---
>> fs/f2fs/segment.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
>> index 020767c..d0f74eb 100644
>> --- a/fs/f2fs/segment.c
>> +++ b/fs/f2fs/segment.c
>> @@ -1305,6 +1305,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
>> mutex_unlock(&sbi->gc_mutex);
>> if (err)
>> break;
>> +
>> + schedule();
>
> Hmm, if another thread is already waiting for gc_mutex, we don't need this here.
> In order to avoid long latency, wouldn't it be enough to reduce the batch size?

Hmm, when fstrim calls mutex_unlock, we pop one blocked locker from the mutex's
FIFO wait list and wake it up; meanwhile the fstrim thread immediately tries to
lock gc_mutex again for the next batch trim, so the woken locker and the fstrim
thread end up racing for gc_mutex. If the fstrim thread is running on a big core
and the woken locker is running on a little core, we can't guarantee the woken
locker will win the race; most of the time the fstrim thread will win. So in
order to reduce starvation of other gc_mutex lockers, it's better to do
schedule() here.
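
For reference, the loop in f2fs_trim_fs() roughly looks like the below
(a simplified sketch, paraphrased from fs/f2fs/segment.c; the range
setup and section rounding are elided):

	while (start_segno <= end_segno) {
		cpc.trim_start = start_segno;
		cpc.trim_end = min(start_segno +
				BATCHED_TRIM_SEGMENTS(sbi) - 1, end_segno);

		/* issue one batch of discards under gc_mutex */
		mutex_lock(&sbi->gc_mutex);
		err = write_checkpoint(sbi, &cpc);
		mutex_unlock(&sbi->gc_mutex);	/* wakes one FIFO waiter */
		if (err)
			break;

		/*
		 * Yield the CPU so the waiter we just woke can actually
		 * grab gc_mutex before we loop and contend for it again.
		 */
		schedule();

		start_segno = cpc.trim_end + 1;
	}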

Thanks,

>
> Thanks,
>
>> }
>> out:
>> range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
>> --
>> 2.7.2
>