From: Vlastimil Babka
Date: Wed Apr 25 2018 - 12:24:49 EST

On 04/25/2018 02:52 PM, Roman Gushchin wrote:
> On Wed, Apr 25, 2018 at 09:19:29AM +0530, Vijayanand Jitta wrote:
>>>>>> Idk, I don't like the idea of adding a counter outside of the vm counters
>>>>>> infrastructure, and I definitely wouldn't touch the exposed
>>>>>> nr_slab_reclaimable and nr_slab_unreclaimable fields.
>>>>> We would be just making the reported values more precise wrt reality.
>>>> It depends on whether we believe that only slab memory can be
>>>> reclaimable. If yes, this is true; otherwise not.
>>>> My guess is that some drivers (e.g. networking) might have buffers,
>>>> which are reclaimable under mempressure, and are allocated using
>>>> the page allocator. But I have to look closer...
>>> One such case I have encountered is the ION page pool. The page pool
>>> registers a shrinker. When not under memory pressure, the page pool can grow
>>> large and thus cause an mmap() to fail when OVERCOMMIT_GUESS is set. I can send
>>> a patch to account ION page pool pages in NR_INDIRECTLY_RECLAIMABLE_BYTES.

FYI, we have discussed this at LSF/MM and agreed to try the kmalloc
reclaimable caches idea. The existing counter could then remain for page
allocator users such as ION. It's a bit weird for it to then be in bytes
rather than pages, IMHO. What if we hid it from /proc/vmstat for now so it
doesn't become ABI, and later converted it to page granularity and exposed
it under a name such as "nr_other_reclaimable"?


> Perfect!
> This is exactly what I expected.
>>> Thanks,
>>> Vinayak
>> As Vinayak mentioned, NR_INDIRECTLY_RECLAIMABLE_BYTES can be used to solve the
>> issue with the ION page pool when OVERCOMMIT_GUESS is set; the patch for this
>> can be found here
> This makes perfect sense to me.
> Please feel free to add:
> Acked-by: Roman Gushchin <guro@xxxxxx>
> Thank you!