Re: [PATCH v5 2/2] mm/memcontrol.c: Reduce reclaim retries in mem_cgroup_resize_limit()
From: Andrey Ryabinin
Date: Thu Feb 22 2018 - 10:15:38 EST
On 02/22/2018 05:09 PM, Michal Hocko wrote:
> On Thu 22-02-18 16:50:33, Andrey Ryabinin wrote:
>> On 02/21/2018 11:17 PM, Andrew Morton wrote:
>>> On Fri, 19 Jan 2018 16:11:18 +0100 Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>>>
>>>> And to be honest, I do not really see why keeping retrying from
>>>> mem_cgroup_resize_limit should be so much faster than keeping retrying from
>>>> the direct reclaim path. We are doing SWAP_CLUSTER_MAX batches anyway.
>>>> mem_cgroup_resize_limit loop adds _some_ overhead but I am not really
>>>> sure why it should be that large.
>>>
>>> Maybe restarting the scan lots of times results in rescanning lots of
>>> ineligible pages at the start of the list before doing useful work?
>>>
>>> Andrey, are you able to determine where all that CPU time is being spent?
>>>
>>
>> I should have been more specific about the test I did. The full script looks like this:
>>
>> mkdir -p /sys/fs/cgroup/memory/test
>> echo $$ > /sys/fs/cgroup/memory/test/tasks
>> cat 4G_file > /dev/null
>> while true; do cat 4G_file > /dev/null; done &
>> loop_pid=$!
>> perf stat echo 50M > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
>> echo -1 > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
>> kill $loop_pid
>>
>>
>> I think the additional loops add some overhead, and it's not that big by itself, but
>> this small overhead allows the task to refill slightly more pages, increasing
>> the total number of pages that mem_cgroup_resize_limit() needs to reclaim.
>>
>> By using the following commands to show the amount of reclaimed pages:
>> perf record -e vmscan:mm_vmscan_memcg_reclaim_end echo 50M > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
>> perf script|cut -d '=' -f 2| paste -sd+ |bc
>>
>> I've got 1259841 pages (4.9G) with the patch vs 1394312 pages (5.4G) without it.
>
> So how does the picture change if you have multiple producers?
>
Drastically, in favor of the patch. But the numbers are *very* fickle from run to run.
Inside a 5G VM with 4 CPUs (qemu -m 5G -smp 4), with 4 processes in the cgroup, each reading a 1G file:
"while true; do cat /1g_f$i > /dev/null; done &"
with the patch:
best: 1.04 sec, 9.7G reclaimed
worst: 2.2 sec, 16G reclaimed
without:
best: 5.4 sec, 35G reclaimed
worst: 22.2 sec, 136G reclaimed