Re: [PATCH V2] mm/memcontrol: add per-memcg pgpgin/pswpin counter
From: Nhat Pham
Date: Fri Oct 11 2024 - 16:02:04 EST
On Fri, Sep 13, 2024 at 8:21 AM Jingxiang Zeng
<jingxiangzeng.cas@xxxxxxxxx> wrote:
>
> From: Jingxiang Zeng <linuszeng@xxxxxxxxxxx>
>
> In proactive memory reclamation scenarios, it is necessary to estimate the
> pswpin and pswpout metrics of the cgroup to determine whether to continue
> reclaiming anonymous pages in the current batch. This patch collects
> these metrics and exposes them.
+1 - this is also useful for zswap shrinker enablement, after which an
anon page can be loaded back in from either swap or zswap.
Differentiating these two situations helps a lot with performance
regression diagnostics.
We have host-level metrics, but they become less useful when we
combine workloads with different characteristics on the same host.
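As a usage note: with this change the counters should show up in the
cgroup's memory.stat next to the existing reclaim stats, e.g.
(hypothetical excerpt, values made up):

	pswpin 1024
	pswpout 4096

so a proactive reclaimer can diff these between batches instead of
relying on the host-wide /proc/vmstat numbers.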
>
> Link: https://lkml.kernel.org/r/20240830082244.156923-1-jingxiangzeng.cas@xxxxxxxxx
> Signed-off-by: Jingxiang Zeng <linuszeng@xxxxxxxxxxx>
> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxxxx>
> Cc: Muchun Song <muchun.song@xxxxxxxxx>
> Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
> Cc: Shakeel Butt <shakeel.butt@xxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
> mm/memcontrol.c | 2 ++
> mm/page_io.c | 4 ++++
> 2 files changed, 6 insertions(+)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 6efbfc9399d0..dbc1d43a5c4c 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -418,6 +418,8 @@ static const unsigned int memcg_vm_event_stat[] = {
> PGPGIN,
> PGPGOUT,
> #endif
> + PSWPIN,
> + PSWPOUT,
> PGSCAN_KSWAPD,
> PGSCAN_DIRECT,
> PGSCAN_KHUGEPAGED,
> diff --git a/mm/page_io.c b/mm/page_io.c
> index b6f1519d63b0..4bc77d1c6bfa 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -310,6 +310,7 @@ static inline void count_swpout_vm_event(struct folio *folio)
> }
> count_mthp_stat(folio_order(folio), MTHP_STAT_SWPOUT);
> #endif
> + count_memcg_folio_events(folio, PSWPOUT, folio_nr_pages(folio));
> count_vm_events(PSWPOUT, folio_nr_pages(folio));
> }
>
> @@ -505,6 +506,7 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
> for (p = 0; p < sio->pages; p++) {
> struct folio *folio = page_folio(sio->bvec[p].bv_page);
>
> + count_memcg_folio_events(folio, PSWPIN, folio_nr_pages(folio));
> folio_mark_uptodate(folio);
> folio_unlock(folio);
> }
> @@ -588,6 +590,7 @@ static void swap_read_folio_bdev_sync(struct folio *folio,
> * attempt to access it in the page fault retry time check.
> */
> get_task_struct(current);
> + count_memcg_folio_events(folio, PSWPIN, folio_nr_pages(folio));
> count_vm_event(PSWPIN);
> submit_bio_wait(&bio);
> __end_swap_bio_read(&bio);
> @@ -603,6 +606,7 @@ static void swap_read_folio_bdev_async(struct folio *folio,
> bio->bi_iter.bi_sector = swap_folio_sector(folio);
> bio->bi_end_io = end_swap_bio_read;
> bio_add_folio_nofail(bio, folio, folio_size(folio), 0);
> + count_memcg_folio_events(folio, PSWPIN, folio_nr_pages(folio));
> count_vm_event(PSWPIN);
Not related to this patch, but why do the global stats not take large
folios into account here... `count_vm_event(PSWPIN);`?
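If the intent is to mirror the swap-out side (count_swpout_vm_event()
above already does count_vm_events(PSWPOUT, folio_nr_pages(folio))), a
minimal sketch of the follow-up fix I'd expect would be:

	/* count all subpages of a large folio, like the PSWPOUT path */
	count_vm_events(PSWPIN, folio_nr_pages(folio));

in both swap_read_folio_bdev_sync() and swap_read_folio_bdev_async(),
replacing the bare count_vm_event(PSWPIN). Just noting it for a
follow-up, not something to fold into this patch.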
Acked-by: Nhat Pham <nphamcs@xxxxxxxxx>