Re: [PATCH v3 RFC] mm/vmscan: more restrictive condition for retry of shrink_zones
From: Shakeel Butt
Date: Sun Mar 12 2017 - 16:20:44 EST
On Sun, Mar 12, 2017 at 4:06 AM, Yisheng Xie <ysxie@xxxxxxxxxxx> wrote:
> From: Yisheng Xie <xieyisheng1@xxxxxxxxxx>
>
> When we enter do_try_to_free_pages(), may_thrash is always clear, and
> when the first round of shrink_zones() reclaims nothing, we set
> may_thrash and retry shrink_zones() to tap the cgroups' memory
> reserves.
>
> However, when memcg is disabled or we are on the legacy hierarchy,
> this retry is useless: there are no cgroup memory reserves to tap,
> and we have already done the hard work without making any progress.
>
> To avoid this costly and pointless retry, add a helper
> mem_cgroup_thrashed() which returns true when memcg is disabled or on
> the legacy hierarchy (plus a stub that always returns true when memcg
> is not compiled in).
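
To make the control flow concrete, here is a minimal userspace model of
the decision described above. This is only a sketch, not kernel code:
the two booleans stand in for cgroup_subsys_enabled(memory_cgrp_subsys)
and cgroup_subsys_on_dfl(memory_cgrp_subsys) used in the patch below.

#include <stdbool.h>
#include <stdio.h>

struct scan_control {
        bool may_thrash;        /* set after the first full reclaim pass */
};

/* Stand-ins for the real memcg state checks used in the patch. */
static bool memcg_enabled = true;       /* false with cgroup_disable=memory */
static bool memcg_on_dfl  = true;       /* false on the legacy (v1) hierarchy */

static bool mem_cgroup_thrashed(struct scan_control *sc)
{
        /* No cgroup reserves to tap: report "thrashed" so we never retry. */
        if (!memcg_enabled || !memcg_on_dfl)
                return true;

        return sc->may_thrash;
}

int main(void)
{
        struct scan_control sc = { .may_thrash = false };

        /* Emulate do_try_to_free_pages() deciding whether to retry. */
        if (!mem_cgroup_thrashed(&sc)) {
                sc.may_thrash = true;
                printf("retrying reclaim to tap cgroup reserves\n");
        } else {
                printf("skipping the useless retry\n");
        }
        return 0;
}

With memcg_enabled or memcg_on_dfl cleared, the retry is skipped, which
is exactly the behaviour the patch is after.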
>
> Signed-off-by: Yisheng Xie <xieyisheng1@xxxxxxxxxx>
> Suggested-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
Thanks.
Reviewed-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
> ---
> v3:
> - rename function may_thrash() to mem_cgroup_thrashed() to avoid confusion.
>
> v2:
> - more restrictive condition for retry of shrink_zones (skip the retry
> when booted with cgroup_disable=memory or on the cgroup legacy
> hierarchy) - Shakeel
>
> - add a stub function may_thrash() to avoid a compile error or warning.
>
> - rename subject from "donot retry shrink zones when memcg is disable"
> to "more restrictive condition for retry in do_try_to_free_pages"
>
> Any comment is more than welcome!
>
> Thanks
> Yisheng Xie
>
> mm/vmscan.c | 20 +++++++++++++++++++-
> 1 file changed, 19 insertions(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bc8031e..a76475af 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -184,6 +184,19 @@ static bool sane_reclaim(struct scan_control *sc)
> #endif
> return false;
> }
> +
> +static bool mem_cgroup_thrashed(struct scan_control *sc)
> +{
> + /*
> + * When memcg is disabled or on the legacy hierarchy, there are no
> + * cgroup memory reserves to tap, so pretend we have already thrashed.
> + */
> + if (!cgroup_subsys_enabled(memory_cgrp_subsys) ||
> + !cgroup_subsys_on_dfl(memory_cgrp_subsys))
> + return true;
> +
> + return sc->may_thrash;
> +}
> #else
> static bool global_reclaim(struct scan_control *sc)
> {
> @@ -194,6 +207,11 @@ static bool sane_reclaim(struct scan_control *sc)
> {
> return true;
> }
> +
> +static bool mem_cgroup_thrashed(struct scan_control *sc)
> +{
> + return true;
> +}
> #endif
>
> /*
> @@ -2808,7 +2826,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> return 1;
>
> /* Untapped cgroup reserves? Don't OOM, retry. */
> - if (!sc->may_thrash) {
> + if (!mem_cgroup_thrashed(sc)) {
> sc->priority = initial_priority;
> sc->may_thrash = 1;
> goto retry;
> --
> 1.9.1
>