Re: [PATCH 1/2 v3] mm: vmscan: do not pass reclaimed slab to vmpressure
From: vinayak menon
Date: Thu Feb 02 2017 - 06:25:54 EST
On Thu, Feb 2, 2017 at 4:18 PM, Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> On Thu 02-02-17 11:44:22, Michal Hocko wrote:
>> On Tue 31-01-17 14:32:08, Vinayak Menon wrote:
>> > During global reclaim, the nr_reclaimed passed to vmpressure
>> > includes the pages reclaimed from slab. But the corresponding
>> > scanned slab pages are not passed. This can cause total reclaimed
>> > pages to be greater than scanned, causing an unsigned underflow
>> > in vmpressure resulting in a critical event being sent to root
>> > cgroup. So do not consider reclaimed slab pages for vmpressure
>> > calculation. The reclaimed pages from slab can be excluded because
>> > the freeing of a page by slab shrinking depends on each slab's
>> > object population, making the cost model (i.e. scan:free) different
>> > from that of LRU.
>>
>> This might be true but what happens if the slab reclaim contributes
>> significantly to the overall reclaim? This would be quite rare but not
>> impossible.
>>
>> I am wondering why we cannot simply cap nr_reclaimed to nr_scanned
>> and be done with it all? Sure it will be imprecise, but the same will
>> be true with this approach.
Consider a case where 100 LRU pages were scanned and only 10 were reclaimed.
Now, say slab shrinking reclaimed 100 pages, and we have no idea how many
slab objects were scanned. The actual vmpressure of 90 will now become 0
because of the addition of the 100 slab pages. So the underflow is not the
only issue; the vmpressure itself becomes incorrect.
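To put numbers on that, here is a rough userspace sketch of the percentage
vmpressure ends up reporting (this only approximates the ratio computed in
vmpressure_calc_level(), with reclaimed clamped to scanned so the unsigned
math cannot underflow):

#include <stdio.h>

/*
 * Rough approximation of the vmpressure ratio:
 * pressure = 100 * (scanned - reclaimed) / scanned,
 * with reclaimed clamped so the subtraction cannot underflow.
 */
static unsigned long pressure(unsigned long scanned, unsigned long reclaimed)
{
	if (!scanned)
		return 0;
	if (reclaimed > scanned)
		reclaimed = scanned;
	return (scanned - reclaimed) * 100 / scanned;
}

int main(void)
{
	/* 100 LRU pages scanned, 10 reclaimed from the LRUs */
	printf("LRU only:   %lu\n", pressure(100, 10));        /* 90 */
	/* same scan, but 100 reclaimed slab pages added to nr_reclaimed */
	printf("LRU + slab: %lu\n", pressure(100, 10 + 100));  /* 0  */
	return 0;
}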
Even though the slab reclaim is not accounted in vmpressure, the reclaimed
slab pages will still have a feedback effect on the LRU pressure, right?
I.e. the next LRU scan will either be smaller or delayed if enough slab
pages are reclaimed, in turn lowering or delaying the vmpressure. If that
is so, wouldn't the current approach of ignoring the slab reclaimed pages
provide a more accurate vmpressure than capping nr_reclaimed to nr_scanned?
Our internal tests on Android actually show the problem: when a vmpressure
that includes the slab reclaimed pages is used to kill tasks, it does not
kick in at the right time.
>
> In other words something as "beautiful" as the following:
> diff --git a/mm/vmpressure.c b/mm/vmpressure.c
> index 149fdf6c5c56..abea42817dd0 100644
> --- a/mm/vmpressure.c
> +++ b/mm/vmpressure.c
> @@ -236,6 +236,15 @@ void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
> return;
>
> /*
> + * Due to accounting issues - e.g. THP contributing 1 to scanned but
> + * potentially much more to reclaimed or SLAB pages not contributing
> + * to scanned at all - we have to skew reclaimed to prevent from
> + * wrong pressure levels due to overflows.
> + */
> + if (reclaimed > scanned)
> + reclaimed = scanned;
> +
> + /*
This underflow problem is fixed by a separate patch:
https://lkml.org/lkml/2017/1/27/48
That patch performs this check only once, at the end of a window period,
rather than on every vmpressure() call (see the sketch below). Is that ok?
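Roughly, and only as a paraphrase of mm/vmpressure.c rather than the exact
patch, the difference is where the reclaimed >= scanned check sits, i.e. in
the function that turns a window worth of scanned/reclaimed into a level:

/* Sketch: the check runs once per vmpressure window, when the level is
 * computed, instead of on every vmpressure() call as in the diff above.
 */
static enum vmpressure_levels vmpressure_calc_level(unsigned long scanned,
						    unsigned long reclaimed)
{
	unsigned long scale = scanned + reclaimed;
	unsigned long pressure = 0;

	/* reclaimed can exceed scanned, e.g. when slab pages are added in */
	if (reclaimed >= scanned)
		goto out;

	pressure = scale - (reclaimed * scale / scanned);
	pressure = pressure * 100 / scale;
out:
	return vmpressure_level(pressure);
}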
Thanks,
Vinayak