Re: [patch] memcg: prevent endless loop with huge pages and near-limit group

From: KAMEZAWA Hiroyuki
Date: Thu Jan 27 2011 - 18:46:28 EST


On Thu, 27 Jan 2011 11:40:14 +0100
Johannes Weiner <hannes@xxxxxxxxxxx> wrote:

> This is a patch I sent to Andrea ages ago in response to a RHEL
> bugzilla. Not sure why it did not reach mainline... But it fixes one
> issue you described in 4/7, namely looping around a not-yet-exceeded
> limit with a huge page that won't fit anymore.
>
> ---
> From: Johannes Weiner <hannes@xxxxxxxxxxx>
> Subject: [patch] memcg: prevent endless loop with huge pages and near-limit group
>
> If reclaim after a failed charging was unsuccessful, the limits are
> checked again, just in case they settled by means of other tasks.
>
> This is all fine as long as every charge is of size PAGE_SIZE, because
> in that case, being below the limit means having at least PAGE_SIZE
> bytes available.
>
> But with transparent huge pages, we may end up in an endless loop
> where charging and reclaim fail, but we keep retrying because the
> limits are not yet exceeded, even though they no longer leave room
> for a huge page.
>
> Fix this up by explicitly checking for enough room, not just whether
> we are within limits.
>
> Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>


Okay, seems to have the same concept as mine.
-Kame

> ---
> include/linux/res_counter.h |   12 ++++++++++++
> mm/memcontrol.c              |   20 +++++++++++---------
> 2 files changed, 23 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/res_counter.h b/include/linux/res_counter.h
> index fcb9884..03212e4 100644
> --- a/include/linux/res_counter.h
> +++ b/include/linux/res_counter.h
> @@ -182,6 +182,18 @@ static inline bool res_counter_check_under_limit(struct res_counter *cnt)
>  	return ret;
>  }
>  
> +static inline bool res_counter_check_room(struct res_counter *cnt,
> +					   unsigned long room)
> +{
> +	bool ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&cnt->lock, flags);
> +	ret = cnt->limit - cnt->usage >= room;
> +	spin_unlock_irqrestore(&cnt->lock, flags);
> +	return ret;
> +}
> +
>  static inline bool res_counter_check_under_soft_limit(struct res_counter *cnt)
>  {
>  	bool ret;
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index d572102..8fa4be3 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1111,6 +1111,15 @@ static bool mem_cgroup_check_under_limit(struct mem_cgroup *mem)
>  	return false;
>  }
>  
> +static bool mem_cgroup_check_room(struct mem_cgroup *mem, unsigned long room)
> +{
> +	if (!res_counter_check_room(&mem->res, room))
> +		return false;
> +	if (!do_swap_account)
> +		return true;
> +	return res_counter_check_room(&mem->memsw, room);
> +}
> +
>  static unsigned int get_swappiness(struct mem_cgroup *memcg)
>  {
>  	struct cgroup *cgrp = memcg->css.cgroup;
> @@ -1844,16 +1853,9 @@ static int __mem_cgroup_do_charge(struct mem_cgroup *mem, gfp_t gfp_mask,
>  	if (!(gfp_mask & __GFP_WAIT))
>  		return CHARGE_WOULDBLOCK;
>  
> -	ret = mem_cgroup_hierarchical_reclaim(mem_over_limit, NULL,
> +	mem_cgroup_hierarchical_reclaim(mem_over_limit, NULL,
>  					      gfp_mask, flags);
> -	/*
> -	 * try_to_free_mem_cgroup_pages() might not give us a full
> -	 * picture of reclaim. Some pages are reclaimed and might be
> -	 * moved to swap cache or just unmapped from the cgroup.
> -	 * Check the limit again to see if the reclaim reduced the
> -	 * current usage of the cgroup before giving up
> -	 */
> -	if (ret || mem_cgroup_check_under_limit(mem_over_limit))
> +	if (mem_cgroup_check_room(mem_over_limit, csize))
>  		return CHARGE_RETRY;
>  
>  	/*
> --
> 1.7.3.5
>
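
For concreteness, here is a small stand-alone userspace sketch of the scenario the changelog describes (the struct, helper names, and numbers below are illustrative only, not the kernel code): a group that is merely below its limit can still lack room for a 2MiB huge page, so the old under-the-limit check keeps signalling retry while an explicit room check fails immediately.

/*
 * Illustrative only -- not kernel code.  The counter mirrors the idea of
 * res_counter, and check_room() mirrors the headroom test added by the
 * patch above; all values are made up.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	(4UL << 10)	/* 4 KiB */
#define HPAGE_SIZE	(2UL << 20)	/* 2 MiB transparent huge page */

struct counter {
	unsigned long usage;
	unsigned long limit;
};

/* old criterion: are we below the limit at all? */
static bool check_under_limit(struct counter *c)
{
	return c->usage < c->limit;
}

/* new criterion: is there headroom for the whole charge? */
static bool check_room(struct counter *c, unsigned long room)
{
	return c->limit - c->usage >= room;
}

int main(void)
{
	/* group is 1 MiB short of its limit */
	struct counter memcg = {
		.usage = 127UL << 20,
		.limit = 128UL << 20,
	};

	/* a 4K charge still fits, so retrying after reclaim made sense */
	printf("4K charge: under_limit=%d room=%d\n",
	       check_under_limit(&memcg), check_room(&memcg, PAGE_SIZE));

	/*
	 * A 2M charge can never fit, yet the old check keeps reporting
	 * "retry" because the limit itself is not exceeded -- that is the
	 * endless loop.  The room check fails at once, so the caller can
	 * give up (and, e.g., fall back to normal pages).
	 */
	printf("2M charge: under_limit=%d room=%d\n",
	       check_under_limit(&memcg), check_room(&memcg, HPAGE_SIZE));

	return 0;
}

Compiled and run, the first line prints under_limit=1 room=1 and the second under_limit=1 room=0, which is exactly the combination that made the old retry condition spin.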
