Re: [PATCH v2] mm: memcg: Use larger batches for proactive reclaim

From: T.J. Mercier
Date: Fri Feb 02 2024 - 17:29:51 EST


On Fri, Feb 2, 2024 at 2:14 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> On Fri, Feb 2, 2024 at 2:10 PM T.J. Mercier <tjmercier@xxxxxxxxxx> wrote:
> >
> > Before 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive
> > reclaim") we passed the number of pages for the reclaim request directly
> > to try_to_free_mem_cgroup_pages, which could lead to significant
> > overreclaim. After 0388536ac291 the number of pages was limited to a
> > maximum of 32 (SWAP_CLUSTER_MAX) to reduce the amount of overreclaim.
> > However, such a small batch size caused a regression in reclaim
> > performance due to many more reclaim start/stop cycles inside
> > memory_reclaim.
> >
> > Reclaim tries to balance nr_to_reclaim fidelity with fairness across
> > nodes and cgroups over which the pages are spread. As such, the bigger
> > the request, the bigger the absolute overreclaim error. Historically,
> > in-kernel users of reclaim have used fixed, small-sized requests to
> > approach an appropriate reclaim rate over time. When reclaiming a user
> > request of arbitrary size, use decaying batch sizes to manage error
> > while maintaining reasonable throughput.
> >
> > root - full reclaim     pages/sec   time (sec)
> > pre-0388536ac291      :     68047        10.46
> > post-0388536ac291     :     13742          inf
> > (reclaim-reclaimed)/4 :     67352        10.51
> >
> > /uid_0 - 1G reclaim     pages/sec   time (sec)   overreclaim (MiB)
> > pre-0388536ac291      :    258822         1.12               107.8
> > post-0388536ac291     :    105174         2.49                 3.5
> > (reclaim-reclaimed)/4 :    233396         1.12                -7.4
> >
> > /uid_0 - full reclaim   pages/sec   time (sec)
> > pre-0388536ac291      :     72334         7.09
> > post-0388536ac291     :     38105        14.45
> > (reclaim-reclaimed)/4 :     72914         6.96
> >
> > Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
> > Signed-off-by: T.J. Mercier <tjmercier@xxxxxxxxxx>
>
> LGTM with a nit below:
> Reviewed-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>

Thanks

> >
> > ---
> > v2: Simplify the request size calculation per Johannes Weiner and Michal Koutný
> >
> > mm/memcontrol.c | 5 ++++-
> > 1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 46d8d02114cf..e6f921555e07 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -6965,6 +6965,9 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> >  	while (nr_reclaimed < nr_to_reclaim) {
> >  		unsigned long reclaimed;
> >
> > +		/* Will converge on zero, but reclaim enforces a minimum */
> > +		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
> > +
> >  		if (signal_pending(current))
> >  			return -EINTR;
> >
> > @@ -6977,7 +6980,7 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> >  		lru_add_drain_all();
> >
> >  		reclaimed = try_to_free_mem_cgroup_pages(memcg,
> > -					min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> > +					batch_size,
> >  					GFP_KERNEL, reclaim_options);
>
> I think the above two lines should now fit into one.

It goes out to 81 characters. I wasn't brave enough, even though the
80 char limit is no more. :)
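
For reference, the collapsed call would look roughly like this (assuming the
existing five-tab continuation indent, which is where the 81 columns come from):

		reclaimed = try_to_free_mem_cgroup_pages(memcg,
					batch_size, GFP_KERNEL, reclaim_options);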

The alternative below takes it out to 100 characters but gets rid of batch_size, if folks are ok with it:

 		reclaimed = try_to_free_mem_cgroup_pages(memcg,
-					min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
+					/* Will converge on zero, but reclaim enforces a minimum */
+					(nr_to_reclaim - nr_reclaimed) / 4,
 					GFP_KERNEL, reclaim_options);
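
To illustrate the decaying-batch behavior described in the commit message,
here is a rough user-space sketch (not kernel code): reclaim_some() is a
made-up stand-in for try_to_free_mem_cgroup_pages() with an invented
overshoot model, and MIN_BATCH stands in for the SWAP_CLUSTER_MAX floor
that reclaim enforces internally.

#include <stdio.h>

#define MIN_BATCH 32UL	/* stand-in for the SWAP_CLUSTER_MAX floor */

/* Fake reclaim: returns the request plus a small, made-up overshoot. */
static unsigned long reclaim_some(unsigned long request)
{
	return request + request / 16;
}

int main(void)
{
	unsigned long nr_to_reclaim = 262144;	/* 1G in 4K pages */
	unsigned long nr_reclaimed = 0;
	int iters = 0;

	while (nr_reclaimed < nr_to_reclaim) {
		/* Will converge on zero, but keep the same minimum reclaim enforces */
		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;

		if (batch_size < MIN_BATCH)
			batch_size = MIN_BATCH;

		nr_reclaimed += reclaim_some(batch_size);
		iters++;
	}

	printf("iterations: %d, overreclaim: %lu pages\n",
	       iters, nr_reclaimed - nr_to_reclaim);
	return 0;
}

The early iterations do most of the work with large batches, and the batch
size shrinks as nr_reclaimed approaches the target, so the final overshoot
past the target comes from a small request rather than from one sized at
the full remainder.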