Re: [PATCH v5 1/2] mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim

From: David Hildenbrand
Date: Thu Apr 06 2023 - 06:32:03 EST


On 05.04.23 20:54, Yosry Ahmed wrote:
> We keep track of different types of reclaimed pages through
> reclaim_state->reclaimed_slab, and we add them to the reported number
> of reclaimed pages. For non-memcg reclaim, this makes sense. For memcg
> reclaim, we have no clue if those pages are charged to the memcg under
> reclaim.
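
For context, producers of this counter follow roughly the pattern below; this is a paraphrased sketch of how non-LRU freeing paths (e.g. the slab allocators) report freed pages back to the reclaimer, not an exact quote of any particular call site:

	/* after freeing 2^order pages outside the LRU, e.g. a slab page */
	if (current->reclaim_state)
		current->reclaim_state->reclaimed_slab += 1 << order;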

> Slab pages are shared by different memcgs, so a freed slab page may have
> only been partially charged to the memcg under reclaim. The same goes for
> clean file pages from pruned inodes (on highmem systems) or xfs buffer
> pages; there is currently no simple way to link them to the memcg under
> reclaim.

> Stop reporting those freed pages as reclaimed pages during memcg reclaim.
> This should make the return value of writing to memory.reclaim more
> accurate, and may help reduce unnecessary reclaim retries during memcg
> charging. Writing to memory.reclaim on the root memcg is considered
> cgroup_reclaim(), but for this case we want to include any freed pages,
> so use the global_reclaim() check instead of !cgroup_reclaim().
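
For reference, the two checks differ roughly as follows in 6.3 (a paraphrased sketch of the helpers in mm/vmscan.c, not an exact quote):

	static bool cgroup_reclaim(struct scan_control *sc)
	{
		/* reclaim targeting any cgroup, including the root cgroup */
		return sc->target_mem_cgroup;
	}

	static bool global_reclaim(struct scan_control *sc)
	{
		/* no target cgroup, or the target is the root cgroup */
		return !sc->target_mem_cgroup ||
		       mem_cgroup_is_root(sc->target_mem_cgroup);
	}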

> Generally, this should make the return value of
> try_to_free_mem_cgroup_pages() more accurate. In some limited cases (e.g.
> freeing a slab page that was mostly charged to the memcg under reclaim),
> the return value of try_to_free_mem_cgroup_pages() can be underestimated,
> but this should be fine. The freed pages will be uncharged anyway, and we

Can't we end up in extreme situations where try_to_free_mem_cgroup_pages() returns close to 0 although a huge amount of memory for that cgroup was freed up?

Can you expand on why "this should be fine"?

I suspect that overestimation might be worse than underestimation. (see my comment proposal below)

> can charge the memcg the next time around as we usually do memcg reclaim
> in a retry loop.

> The next patch performs some cleanups around reclaim_state and adds an
> elaborate comment explaining this to the code. This patch is kept
> minimal for easy backporting.

> Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx

Fixes: ?

Otherwise it's hard to judge how far to backport this.

> ---
>
> global_reclaim(sc) does not exist in kernels before 6.3. It can be
> replaced with:
> !cgroup_reclaim(sc) || mem_cgroup_is_root(sc->target_mem_cgroup)
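
For older kernels, the shrink_node() hunk further below might then look roughly like this (a sketch based purely on the substitution above, not the posted stable patch):

	if (reclaim_state &&
	    (!cgroup_reclaim(sc) || mem_cgroup_is_root(sc->target_mem_cgroup))) {
		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
		reclaim_state->reclaimed_slab = 0;
	}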

> ---
>  mm/vmscan.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)

> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9c1c5e8b24b8f..c82bd89f90364 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -5346,8 +5346,10 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>  		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
>  			   sc->nr_reclaimed - reclaimed);
>
> -	sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> -	current->reclaim_state->reclaimed_slab = 0;

Worth adding a comment like

/*
 * Slab pages cannot universally be linked to a single memcg. So only
 * account them as reclaimed during global reclaim. Note that we might
 * underestimate the amount of memory reclaimed (but won't overestimate
 * it).
 */

but ...

> +	if (global_reclaim(sc)) {
> +		sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> +		current->reclaim_state->reclaimed_slab = 0;
> +	}
>
>  	return success ? MEMCG_LRU_YOUNG : 0;
>  }
> @@ -6472,7 +6474,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>
>  	shrink_node_memcgs(pgdat, sc);

... do we want to factor the add+clear into a simple helper so that we can have the above comment there?

static void cond_account_reclaimed_slab(struct reclaim_state *reclaim_state,
					struct scan_control *sc)
{
	/*
	 * Slab pages cannot universally be linked to a single memcg. So
	 * only account them as reclaimed during global reclaim. Note
	 * that we might underestimate the amount of memory reclaimed
	 * (but won't overestimate it).
	 */
	if (global_reclaim(sc)) {
		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
		reclaim_state->reclaimed_slab = 0;
	}
}

Yes, effectively a couple of LOC more, but still straightforward for a stable backport.

> -	if (reclaim_state) {
> +	if (reclaim_state && global_reclaim(sc)) {
>  		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
>  		reclaim_state->reclaimed_slab = 0;
>  	}
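
With a helper like the one sketched above, the two call sites would then collapse to something like this (again only a sketch, assuming the proposed helper signature):

	/* in shrink_one() */
	cond_account_reclaimed_slab(current->reclaim_state, sc);

	/* in shrink_node() */
	if (reclaim_state)
		cond_account_reclaimed_slab(reclaim_state, sc);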

--
Thanks,

David / dhildenb