Re: [f2fs-dev] [PATCH 1/2] f2fs: do not select same victim right again

From: Jaegeuk Kim
Date: Wed Sep 18 2019 - 12:47:58 EST


On 09/18, Chao Yu wrote:
> On 2019/9/18 11:12, Jaegeuk Kim wrote:
> > On 09/18, Chao Yu wrote:
> >> On 2019/9/18 4:55, Jaegeuk Kim wrote:
> >>> On 09/17, Chao Yu wrote:
> >>>> On 2019/9/16 23:37, Jaegeuk Kim wrote:
> >>>>> On 09/16, Chao Yu wrote:
> >>>>>> On 2019/9/9 20:04, Jaegeuk Kim wrote:
> >>>>>>> On 09/09, Chao Yu wrote:
> >>>>>>>> On 2019/9/9 16:06, Jaegeuk Kim wrote:
> >>>>>>>>> On 09/09, Chao Yu wrote:
> >>>>>>>>>> On 2019/9/9 9:25, Jaegeuk Kim wrote:
> >>>>>>>>>>> GC must avoid selecting the same victim again.
> >>>>>>>>>>
> >>>>>>>>>> Blocks in the previous victim will occupy an additional free segment; I worry
> >>>>>>>>>> that, after this change, FGGC may encounter out-of-free-space issues more
> >>>>>>>>>> frequently.
> >>>>>>>>>
> >>>>>>>>> Hmm, actually this change seems wrong given sec_usage_check().
> >>>>>>>>> Maybe we can avoid this only in the suspicious loop?
> >>>>>>>>>
> >>>>>>>>> ---
> >>>>>>>>> fs/f2fs/gc.c | 2 +-
> >>>>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>>>>>>>>
> >>>>>>>>> diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
> >>>>>>>>> index e88f98ddf396..5877bd729689 100644
> >>>>>>>>> --- a/fs/f2fs/gc.c
> >>>>>>>>> +++ b/fs/f2fs/gc.c
> >>>>>>>>> @@ -1326,7 +1326,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
> >>>>>>>>>  		round++;
> >>>>>>>>>  	}
> >>>>>>>>>
> >>>>>>>>> -	if (gc_type == FG_GC)
> >>>>>>>>> +	if (gc_type == FG_GC && seg_freed)
> >>>>>>>>
> >>>>>>>> That's the original solution Sahitya provided to avoid an infinite GC loop, but
> >>>>>>>> I suggested finding the root cause first; that's why we added .invalid_segmap
> >>>>>>>> for that purpose.
> >>>>>>>
> >>>>>>> I've checked Sahitya's patch. It seems the problem can happen due to
> >>>>>>> is_alive or atomic_file.
> >>>>>>
> >>>>>> Under some conditions this doesn't help; for example, if two sections contain
> >>>>>> the same fewest valid blocks, GC will keep looping between them if it fails to
> >>>>>> migrate their blocks.
> >>>>>>
> >>>>>> How about keeping it as it is, so we can find the potential bug?
> >>>>>
> >>>>> I think it'd be fine to merge this. Could you check the above scenario in more
> >>>>> detail?
> >>>>
> >>>> I haven't seen this in a real scenario yet.
> >>>>
> >>>> What I mean is: if there is a bug (maybe in is_alive()) that makes GC fail on one
> >>>> section, and that bug hits two candidates, we could end up in the same
> >>>> condition where GC runs into a loop (select A, fail to migrate; select B, fail
> >>>> to migrate; select A again...).
> >>>>
> >>>> But I guess the benefit of this change is that, if FGGC fails to migrate blocks
> >>>> due to an i_gc_rwsem race, selecting another section and retrying the previous
> >>>> one later may avoid the lock race, right?
> >>>
> >>> In any case, I think this can avoid a potential GC loop. At least to me, it'd be
> >>> quite risky to leave it as it is just for debugging purposes.
> >>
> >> Yup,
> >>
> >> One more concern: would cur_victim_sec remain set after FGGC? Then BGGC/SSR
> >> will always skip the section cur_victim_sec points to.
> >
> > Then, we could get into another loop before it is used by BGGC/SSR.
>
> I guess I didn't catch your point. Do you mean that, if we reset it at the end of
> FGGC, we may encounter the loop during BGGC/SSR?

FGGC failed in a loop and the last victim remained in cur_victim_sec. The next
FGGC kicked in and did the same thing again. I don't expect BGGC/SSR to want to
select this victim much, since it will use the CB policy.
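
(For reference, the skip Chao is worried about is sec_usage_check() in gc.c;
roughly paraphrased here, not part of this patch:

	/*
	 * A candidate section is skipped while it is an open (current)
	 * section or while it is still recorded as the FGGC victim.
	 */
	static bool sec_usage_check(struct f2fs_sb_info *sbi, unsigned int secno)
	{
		return IS_CURSEC(sbi, secno) || sbi->cur_victim_sec == secno;
	}

	/* in get_victim_by_default(), for every candidate; BGGC and SSR
	 * scans take this path as well */
	if (sec_usage_check(sbi, secno))
		goto next;

So a stale cur_victim_sec keeps that one section out of BGGC/SSR selection
until it is cleared, even if the CB policy would rarely score it highest
anyway.)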

>
> I meant:
>
> f2fs_gc()
> ...
>
> + if (gc_type == FG_GC)
> + sbi->cur_victim_sec = NULL_SEGNO;
>
> mutex_unlock(&sbi->gc_mutex);
>
> put_gc_inode(&gc_list);
> ...
>
> Thanks,
>
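
Putting the two pieces together, the shape being discussed would look roughly
like this (a sketch based on the hunk above and the snippet quoted here, not a
tested patch):

	/* per GC round in f2fs_gc(): with the change above, the victim is
	 * only forgotten once it actually freed segments, so a victim that
	 * failed to migrate stays skipped on the next round */
	if (gc_type == FG_GC && seg_freed)
		sbi->cur_victim_sec = NULL_SEGNO;
	...
	/* at the end of f2fs_gc(), per the proposal: clear the victim
	 * unconditionally so BGGC/SSR are not left skipping that section
	 * after FGGC returns */
	if (gc_type == FG_GC)
		sbi->cur_victim_sec = NULL_SEGNO;

	mutex_unlock(&sbi->gc_mutex);

	put_gc_inode(&gc_list);
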
> >
> >>
> >> So could we reset cur_victim_sec at the end of FGGC?
> >>
> >> Thanks,
> >>
> >>>
> >>>>
> >>>> Thanks,
> >>>>
> >>>>>
> >>>>> Thanks,
> >>>>>
> >>>>>>
> >>>>>> Thanks,
> >>>>>>
> >>>>>>>
> >>>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>>
> >>>>>>>>>  		sbi->cur_victim_sec = NULL_SEGNO;
> >>>>>>>>>
> >>>>>>>>>  	if (sync)
> >>>>>>>>>