Re: [BUGFIX][PATCH 1/4] memcg: fix limit estimation at reclaim for hugepage
From: Minchan Kim
Date: Sat Jan 29 2011 - 21:26:23 EST
On Fri, Jan 28, 2011 at 5:36 PM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Fri, 28 Jan 2011 17:25:58 +0900
> Minchan Kim <minchan.kim@xxxxxxxxx> wrote:
>> Hi Hannes,
>> On Fri, Jan 28, 2011 at 5:17 PM, Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>> > On Fri, Jan 28, 2011 at 05:04:16PM +0900, Minchan Kim wrote:
>> >> Hi Kame,
>> >> On Fri, Jan 28, 2011 at 1:58 PM, KAMEZAWA Hiroyuki
>> >> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
>> >> > How about this ?
>> >> > ==
>> >> > From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
>> >> >
>> >> > The current memory cgroup code tends to assume page_size ==
>> >> > PAGE_SIZE, and its handling of THP is not yet sufficient.
>> >> >
>> >> > This is one of the fixes needed to support THP. It adds
>> >> > mem_cgroup_check_margin(), which checks whether the required
>> >> > amount of resource is actually free after memory reclaim. With
>> >> > this, a THP page allocation can tell whether reclaim really made
>> >> > enough room, and avoid an infinite loop and hangup.
>> >> >
>> >> > The rest of the fixes for do_charge()/memory reclaim will follow
>> >> > this patch.
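>> >> >
>> >> > A rough sketch of the new check (illustrative only; it assumes a
>> >> > res_counter helper along the lines of res_counter_check_margin(),
>> >> > testing that usage + bytes still fits under the limit):
>> >> >
>> >> > /*
>> >> >  * Check whether the cgroup (and its swap counter, if swap
>> >> >  * accounting is enabled) still has at least @bytes of room
>> >> >  * below the limit, i.e. whether a charge of @bytes can succeed.
>> >> >  */
>> >> > static bool mem_cgroup_check_margin(struct mem_cgroup *mem,
>> >> >                                     unsigned long bytes)
>> >> > {
>> >> >         if (!res_counter_check_margin(&mem->res, bytes))
>> >> >                 return false;
>> >> >         if (do_swap_account &&
>> >> >             !res_counter_check_margin(&mem->memsw, bytes))
>> >> >                 return false;
>> >> >         return true;
>> >> > }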
>> >> If this patch only matters for THP, I think the patch order isn't
>> >> good. Before [2/4] is applied, huge page allocation will retry
>> >> without reclaiming and loop forever because of the part below.
>> >> @@ -1854,9 +1858,6 @@ static int __mem_cgroup_do_charge(struct
>> >>  	} else
>> >>  		mem_over_limit = mem_cgroup_from_res_counter(fail_res, res);
>> >> -	if (csize > PAGE_SIZE) /* change csize and retry */
>> >> -		return CHARGE_RETRY;
>> >> -
>> >>  	if (!(gfp_mask & __GFP_WAIT))
>> >>  		return CHARGE_WOULDBLOCK;
>> >> Am I missing something?
>> > No, you are correct. But I am not sure the order really matters in
>> > theory: you have two endless loops that need independent fixing.
>> That's why I asked the question.
>> Two endless loops?
>> One is the one I mentioned. What is the other?
>> Maybe this patch solves the other one,
>> but I can't tell that from this description alone.
>> Please open my eyes.
> One is this:
>
> 	if (csize > PAGE_SIZE)
> 		return CHARGE_RETRY;
>
> Because of this, reclaim will never be called for a huge page charge.
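>
> In effect, the charge path boils down to this (a simplified sketch of
> the control flow, not the literal __mem_cgroup_do_charge() code):
>
> 	for (;;) {
> 		/* res_counter_charge() returns 0 on success */
> 		if (!res_counter_charge(&mem->res, csize, &fail_res))
> 			break;			/* charge succeeded */
> 		if (csize > PAGE_SIZE)
> 			continue;		/* huge page: retry forever;
> 						 * reclaim is never reached */
> 		mem_cgroup_hierarchical_reclaim(mem_over_limit, NULL,
> 						gfp_mask, flags);
> 	}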
> The other is the check after memory reclaim:
> 	ret = mem_cgroup_hierarchical_reclaim(mem_over_limit, NULL,
> 					      gfp_mask, flags);
> 	/*
> 	 * try_to_free_mem_cgroup_pages() might not give us a full
> 	 * picture of reclaim. Some pages are reclaimed and might be
> 	 * moved to swap cache or just unmapped from the cgroup.
> 	 * Check the limit again to see if the reclaim reduced the
> 	 * current usage of the cgroup before giving up
> 	 */
> 	if (ret || mem_cgroup_check_under_limit(mem_over_limit))
> 		return CHARGE_RETRY;
> ret != 0 if even one page was reclaimed. Then khugepaged retries the
> charge, still cannot get enough room, reclaims one more page, and goes
> around again. So, in a busy memcg, an HPAGE_SIZE charge never fails;
> it just loops.
> Even when khugepaged does luckily allocate HPAGE_SIZE, because it walks
> the vmas one by one and tries to collapse each pmd under mmap_sem, the
> net effect looks like a hang caused by khugepaged: an infinite loop.
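>
> With a margin check, the retry decision after reclaim can instead
> demand room for the whole charge, roughly like this (the exact
> condition is in the patch):
>
> 	if (mem_cgroup_check_margin(mem_over_limit, csize))
> 		return CHARGE_RETRY;	/* enough room for csize bytes now */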
Kame, Hannes, thanks.
I understand your opinions now. :)
As I said earlier, it would at least help patch review.
When I first saw [1/4] on its own, I felt it didn't change anything,
since the THP allocation would bail out earlier, before reaching your
patch, so the infinite loop would still happen.
Of course, once [2/4] is applied the problem is gone. But I couldn't
know that until I read [2/4], and that confuses reviewers.
So I suggest moving [2/4] ahead of [1/4] and including something like
the following in [2/4]:
"This patch still has an infinite loop problem in the case of xxxx.
The next patch solves it."
If it doesn't cause a problem for bisecting, I hope the patch order can
be changed, if you don't mind. The same comment applies to Hannes's
version of the series.
I will review again when Hannes resends the series.