Re: Re: [PATCH] mm/memcontrol.c: speed up to force empty a memory cgroup
From: Li,Rongqing
Date: Thu Mar 22 2018 - 22:59:02 EST
> -----Original Message-----
> From: linux-kernel-owner@xxxxxxxxxxxxxxx
> [mailto:linux-kernel-owner@xxxxxxxxxxxxxxx] On Behalf Of Li,Rongqing
> Sent: 2018-03-19 18:52
> To: Michal Hocko <mhocko@xxxxxxxxxx>
> Cc: linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> cgroups@xxxxxxxxxxxxxxx; hannes@xxxxxxxxxxx; Andrey Ryabinin
> <aryabinin@xxxxxxxxxxxxx>
> Subject: Re: Re: [PATCH] mm/memcontrol.c: speed up to force empty a
> memory cgroup
>
>
>
> > -----Original Message-----
> > From: Michal Hocko [mailto:mhocko@xxxxxxxxxx]
> > Sent: 2018-03-19 18:38
> > To: Li,Rongqing <lirongqing@xxxxxxxxx>
> > Cc: linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> > cgroups@xxxxxxxxxxxxxxx; hannes@xxxxxxxxxxx; Andrey Ryabinin
> > <aryabinin@xxxxxxxxxxxxx>
> > Subject: Re: Re: [PATCH] mm/memcontrol.c: speed up to force empty a
> > memory cgroup
> >
> > On Mon 19-03-18 10:00:41, Li,Rongqing wrote:
> > >
> > >
> > > > -----Original Message-----
> > > > From: Michal Hocko [mailto:mhocko@xxxxxxxxxx]
> > > > Sent: 2018-03-19 16:54
> > > > To: Li,Rongqing <lirongqing@xxxxxxxxx>
> > > > Cc: linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> > > > cgroups@xxxxxxxxxxxxxxx; hannes@xxxxxxxxxxx; Andrey Ryabinin
> > > > <aryabinin@xxxxxxxxxxxxx>
> > > > Subject: Re: [PATCH] mm/memcontrol.c: speed up to force empty a
> > > > memory cgroup
> > > >
> > > > On Mon 19-03-18 16:29:30, Li RongQing wrote:
> > > > > mem_cgroup_force_empty() tries to free only 32 (SWAP_CLUSTER_MAX)
> > > > > pages on each iteration; if a memory cgroup has lots of page
> > > > > cache, it will take many iterations to empty all of it, so
> > > > > increase the reclaimed number per iteration to speed it up, the
> > > > > same as in mem_cgroup_resize_limit().
> > > > >
> > > > > A simple test shows:
> > > > >
> > > > > $dd if=aaa of=bbb bs=1k count=3886080
> > > > > $rm -f bbb
> > > > > $time echo 100000000 >/cgroup/memory/test/memory.limit_in_bytes
> > > > >
> > > > > Before: 0m0.252s ===> after: 0m0.178s
> > > >
> > > > Andrey was proposing something similar [1]. My main objection was
> > > > that his approach might lead to over-reclaim. Your approach is
> > > > more conservative because it just increases the batch size. The
> > > > size is still rather arbitrary. Same as SWAP_CLUSTER_MAX but that
> > > > one is a commonly used unit of reclaim in the MM code.
> > > >
> > > > I would be really curious about a more detailed explanation of why
> > > > having a larger batch yields better performance, because we are
> > > > doing SWAP_CLUSTER_MAX batches at the lower reclaim level anyway.
> > > >
> > >
> > > Although SWAP_CLUSTER_MAX is used at the lower level, the call
> > > stack of try_to_free_mem_cgroup_pages is long; increasing
> > > nr_to_reclaim reduces the number of calls to the functions on that
> > > path [do_try_to_free_pages, shrink_zones, shrink_node].
> > >
> > > mem_cgroup_resize_limit
> > >  ---> try_to_free_mem_cgroup_pages: .nr_to_reclaim = max(1024, SWAP_CLUSTER_MAX)
> > >   ---> do_try_to_free_pages
> > >    ---> shrink_zones
> > >     ---> shrink_node
> > >      ---> shrink_node_memcg
> > >       ---> shrink_list   <----- the loop happens here [times = 1024/32]
> > >        ---> shrink_page_list
> >
> > Can you actually measure this to be the culprit? Because we should
> > rethink our call path if it is too complicated/deep to perform well.
> > Adding arbitrary batch sizes doesn't sound like a good way to go to me.
>
> Ok, I will try
>
http://pasted.co/4edbcfff
This is the result from the ftrace function graph; it may prove that the deep call path leads to low performance.
And when we increase the number of pages reclaimed per call to try_to_free_mem_cgroup_pages, it reduces how often shrink_slab is called, which saves time: in my case page cache occupies most of the memory and slab is small, but shrink_slab is still called on every pass.
mutex_lock                             1 us
try_to_free_mem_cgroup_pages
  do_try_to_free_pages             ! 185.020 us
    shrink_node                    ! 116.529 us
      shrink_node_memcg               39.203 us
        shrink_inactive_list          33.960 us
      shrink_slab                     72.955 us
    shrink_node                       61.502 us
      shrink_node_memcg                3.955 us
      shrink_slab                     54.296 us
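
For reference, the change being discussed only raises the per-call batch handed to try_to_free_mem_cgroup_pages(); here is a rough sketch of the force-empty loop (simplified from mm/memcontrol.c of that era and written from memory, not the submitted diff; the 1024 figure simply mirrors what mem_cgroup_resize_limit() requests, as shown in the call path above):

	/* Sketch only -- simplified, not the exact patch. */
	static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
	{
		int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;

		lru_add_drain_all();	/* flush per-cpu LRU caches first */

		while (nr_retries && page_counter_read(&memcg->memory)) {
			if (signal_pending(current))
				return -EINTR;

			/*
			 * try_to_free_mem_cgroup_pages() clamps the request
			 * with max(nr_pages, SWAP_CLUSTER_MAX), so asking for
			 * 1 page really asks for 32, and the whole
			 * do_try_to_free_pages() -> shrink_node() ->
			 * shrink_slab() stack is re-entered once per 32 pages
			 * of page cache.  Requesting 1024 pages per call cuts
			 * the number of such passes by roughly 32x.
			 */
			if (!try_to_free_mem_cgroup_pages(memcg,
							  1024 /* was 1 */,
							  GFP_KERNEL, true))
				nr_retries--;
		}

		return 0;
	}

Raising the batch size, rather than trying to reclaim everything in one call, is the conservative option mentioned above: it bounds over-reclaim while still cutting the number of full reclaim passes (and shrink_slab invocations).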
-RongQing
> -RongQing
> > --
> > Michal Hocko
> > SUSE Labs