Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

From: Roman Gushchin
Date: Tue Mar 30 2021 - 14:59:42 EST


On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <songmuchun@xxxxxxxxxxxxx> wrote:
> >
> > Since the following patchsets were applied, all kernel memory is charged
> > with the new obj_cgroup APIs.
> >
> > [v17,00/19] The new cgroup slab memory controller
> > [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> >
> > But user memory allocations (LRU pages) can pin memcgs for a long time -
> > this happens at a larger scale and is causing recurring problems in the
> > real world: page cache doesn't get reclaimed for a long time, or is used
> > by the second, third, fourth, ... instance of the same job that was
> > restarted into a new cgroup every time. Unreclaimable dying cgroups pile
> > up, waste memory, and make page reclaim very inefficient.
> >
> > We can fix this by converting LRU pages and most other raw memcg pins
> > to objcg references, so that LRU pages no longer pin their memcgs.
> >
> > This patchset makes LRU pages drop their reference to the memory cgroup
> > by using the obj_cgroup APIs. With the series applied, the number of
> > dying cgroups no longer grows when we run the following test script.
> >
> > ```bash
> > #!/bin/bash
> >
> > # Show the memory controller's cgroup count before the test.
> > cat /proc/cgroups | grep memory
> >
> > cd /sys/fs/cgroup/memory
> >
> > for i in {1..500}
> > do
> >     mkdir test
> >     echo $$ > test/cgroup.procs
> >     # Leave a background process behind in the test cgroup.
> >     sleep 60 &
> >     echo $$ > cgroup.procs
> >     # Move the background process out so the cgroup can be removed.
> >     echo `cat test/cgroup.procs` > cgroup.procs
> >     rmdir test
> > done
> >
> > # Show the count again; dying cgroups that are still pinned by their
> > # LRU pages keep it elevated.
> > cat /proc/cgroups | grep memory
> > ```
> >
> > Patch 1 fixes page charging during page replacement.
> > Patches 2-5 are code cleanups and simplifications.
> > Patches 6-15 convert the LRU page pins to the objcg direction.
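
For folks who haven't followed the previous series: the conversion is
essentially one extra level of indirection. Instead of pointing at a
mem_cgroup directly, an LRU page points at an obj_cgroup, whose memcg
pointer can be redirected to the parent when the cgroup dies. A minimal
standalone model of the idea (the names and layout below are simplified
and do not match the actual patchset, which encodes this in
page->memcg_data with flag bits, reference counting, etc.):

```c
#include <stddef.h>

/* Simplified model only, not the kernel's real data structures. */
struct mem_cgroup;

struct obj_cgroup {
	struct mem_cgroup *memcg;	/* redirected to the parent on reparenting */
};

struct lru_page {
	struct obj_cgroup *objcg;	/* replaces the direct mem_cgroup pointer */
};

/*
 * The page pins only the obj_cgroup.  When the cgroup is deleted,
 * objcg->memcg is switched to the parent, so the dying mem_cgroup
 * is no longer pinned by every page still sitting on its LRU.
 */
static inline struct mem_cgroup *lru_page_memcg(struct lru_page *page)
{
	return page->objcg ? page->objcg->memcg : NULL;
}
```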
>
> The main concern I have with *just* reparenting LRU pages is that, for
> long-running systems, the root memcg will become a dumping ground. In
> addition, a job running multiple times on a machine will see inconsistent
> memory usage if it re-accesses the file pages that were reparented to the
> root memcg.

I agree, and reparenting also interacts poorly with memory protections
(e.g. memory.low).

Imagine the following configuration:
workload.slice
- workload_gen_1.service memory.min = 30G
- workload_gen_2.service memory.min = 30G
- workload_gen_3.service memory.min = 30G
...

A parent cgroup with several generations of child cgroups, each protected
by memory.min. Once memory is reparented, it's not protected anymore: the
per-generation protection applies to the child cgroup's charges, but the
reparented pages are now charged to the parent instead.

I guess we need something smarter: e.g. reassign a page to a different
cgroup if the page is activated/rotated while it sits on the LRU of a
dying cgroup.
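
Purely as a sketch of where such a check could live - none of these
helpers exist in this form, the names are made up for illustration:

```c
#include <stdbool.h>

/* Hypothetical helpers, invented for this sketch. */
struct mem_cgroup;
struct lru_page;

extern struct mem_cgroup *lru_page_memcg(struct lru_page *page);
extern bool mem_cgroup_is_dying(struct mem_cgroup *memcg);
extern struct mem_cgroup *current_task_memcg(void);
extern void recharge_lru_page(struct lru_page *page, struct mem_cgroup *to);

/* Called from the activation/rotation path. */
static void maybe_recharge_on_activation(struct lru_page *page)
{
	struct mem_cgroup *memcg = lru_page_memcg(page);

	/*
	 * If the page is still charged to a dying cgroup, move the
	 * charge to the cgroup of the task touching it, so a hot page
	 * doesn't linger on a dying LRU and silently lose its memory
	 * protection after reparenting.
	 */
	if (memcg && mem_cgroup_is_dying(memcg))
		recharge_lru_page(page, current_task_memcg());
}
```

Charging to the memcg of the task touching the page is only one possible
policy; for shared file pages it's a guess, which is part of why this
needs to be smarter.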

Also, I'm somewhat concerned about how reparenting interacts with
writeback and dirty throttling. How do they work together?

Thanks!