Re: [PATCH v3] [mm-unstable] mm: Fix memcg reclaim on memory tiered systems

From: Michal Hocko
Date: Thu Dec 08 2022 - 03:09:47 EST


On Wed 07-12-22 13:43:55, Mina Almasry wrote:
> On Wed, Dec 7, 2022 at 3:12 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
[...]
> > Anyway a proper nr_reclaimed tracking should be rather straightforward
> > but I do not expect to make a big difference in practice
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 026199c047e0..1b7f2d8cb128 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1633,7 +1633,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> >  	LIST_HEAD(ret_folios);
> >  	LIST_HEAD(free_folios);
> >  	LIST_HEAD(demote_folios);
> > -	unsigned int nr_reclaimed = 0;
> > +	unsigned int nr_reclaimed = 0, nr_demoted = 0;
> >  	unsigned int pgactivate = 0;
> >  	bool do_demote_pass;
> >  	struct swap_iocb *plug = NULL;
> > @@ -2065,8 +2065,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> >  	}
> >  	/* 'folio_list' is always empty here */
> >
> > -	/* Migrate folios selected for demotion */
> > -	nr_reclaimed += demote_folio_list(&demote_folios, pgdat);
> > +	/*
> > +	 * Migrate folios selected for demotion.
> > +	 * Do not count demoted pages as reclaimed for memcg reclaim
> > +	 * because no charges are actually freed by the migration. Global
> > +	 * reclaim aims at releasing memory from nodes/zones, so consider
> > +	 * demotion to be reclaim there.
> > +	 */
> > +	nr_demoted += demote_folio_list(&demote_folios, pgdat);
> > +	if (!cgroup_reclaim(sc))
> > +		nr_reclaimed += nr_demoted;
> > +
> >  	/* Folios that could not be demoted are still in @demote_folios */
> >  	if (!list_empty(&demote_folios)) {
> >  		/* Folios which weren't demoted go back on @folio_list for retry: */
> >
> > [...]
>
> Thank you again, but this patch breaks the memory.reclaim nodes arg
> for me. This is my test case. I run it on a machine with 2 memory
> tiers.
>
> Memory tier 1= nodes 0-2
> Memory tier 2= node 3
>
> mkdir -p /sys/fs/cgroup/unified/test
> cd /sys/fs/cgroup/unified/test
> echo $$ > cgroup.procs
> head -c 500m /dev/random > /tmp/testfile
> echo $$ > /sys/fs/cgroup/unified/cgroup.procs
> echo "1m nodes=0-2" > memory.reclaim
>
> In my opinion the expected behavior is for the kernel to demote 1MB of
> memory from nodes 0-2 to node 3.
>
> Actual behavior on the tip of mm-unstable is as expected.
>
> Actual behavior with your patch cherry-picked to mm-unstable is that
> the kernel demotes all 500MB of memory from nodes 0-2 to node 3, and
> returns -EAGAIN to the user. This may be the correct behavior you're
> intending, but it completely breaks the use case I implemented the
> nodes= arg for and listed in the commit message of that change.

Yes, strictly speaking the behavior is correct albeit unexpected. You
have told the kernel to _reclaim_ that much memory, but demotion is
simply aging rather than reclaim when the demotion target has plenty of
free memory. This would be the case without any nodemask as well, btw.

I am worried this will keep popping up again and again. I thought your
nodes subset approach could deal with this but I have overlooked one
important thing in your patch. The user-provided nodemask controls where
to reclaim from but it doesn't constrain demotion targets. Is this
intentional? Would it actually make more sense to control demotion by
adding demotion nodes into the nodemask?

--
Michal Hocko
SUSE Labs