Re: Re: [PATCH v7 05/15] mm/damon: Adaptively adjust regions

From: SeongJae Park
Date: Wed Apr 01 2020 - 04:23:38 EST


On Tue, 31 Mar 2020 17:08:55 +0100 Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx> wrote:

> On Wed, 18 Mar 2020 12:27:12 +0100
> SeongJae Park <sjpark@xxxxxxxxxx> wrote:
>
> > From: SeongJae Park <sjpark@xxxxxxxxx>
> >
> > At the beginning of the monitoring, DAMON constructs the initial regions
> > by evenly splitting the memory mapped address space of the process into
> > the user-specified minimal number of regions. In this initial state,
> > the assumption behind the regions (that pages in the same region have
> > similar access frequencies) normally does not hold, and thus the
> > monitoring quality could be low. To keep the assumption holding as much
> > as possible, DAMON adaptively
> > merges and splits each region.
> >
> > For each ``aggregation interval``, it compares the access frequencies of
> > adjacent regions and merges those if the frequency difference is small.
> > Then, after it reports and clears the aggregated access frequency of
> > each region, it splits each region into two regions if the total number
> > of regions is smaller than half of the user-specified maximum number
> > of regions.
> >
> > In this way, DAMON provides its best-effort quality and minimal overhead
> > while keeping the bounds users set for their trade-off.
> >
> > Signed-off-by: SeongJae Park <sjpark@xxxxxxxxx>
>
> A few more edge cases in here, and a suggestion that might be more costly
> but lead to simpler code.

Thank you for finding those!
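For readers following along, the merge step the patch describes can be modeled in plain userspace C. This is only a simplified sketch with a hypothetical 'struct region' (the real 'struct damon_region' carries more state, and the kernel walks its own list type); the threshold here corresponds to the max_nr_accesses / 10 value passed in the patch below:

```c
#include <stdlib.h>

/* Hypothetical, pared-down userspace model of a monitoring region;
 * the real struct damon_region in the patch carries more state. */
struct region {
	unsigned long start, end;	/* [start, end) address range */
	unsigned int nr_accesses;	/* aggregated access frequency */
	struct region *next;
};

/*
 * Merge each adjacent pair of regions whose access frequencies differ
 * by at most 'threshold', in the spirit of the patch's merge step.
 */
static void merge_regions(struct region *head, unsigned int threshold)
{
	struct region *r = head;

	while (r && r->next) {
		struct region *n = r->next;
		unsigned int diff = r->nr_accesses > n->nr_accesses ?
				r->nr_accesses - n->nr_accesses :
				n->nr_accesses - r->nr_accesses;

		if (r->end == n->start && diff <= threshold) {
			/* Size-weighted average keeps the merged
			 * frequency representative of both halves. */
			unsigned long sz_r = r->end - r->start;
			unsigned long sz_n = n->end - n->start;

			r->nr_accesses = (r->nr_accesses * sz_r +
					n->nr_accesses * sz_n) /
					(sz_r + sz_n);
			r->end = n->end;
			r->next = n->next;
			free(n);
		} else {
			r = n;
		}
	}
}
```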

>
> Jonathan
>
> > ---
> > include/linux/damon.h | 6 +-
> > mm/damon.c | 148 ++++++++++++++++++++++++++++++++++++++++--
> > 2 files changed, 145 insertions(+), 9 deletions(-)
> >
[...]
> > diff --git a/mm/damon.c b/mm/damon.c
> > index 018016793555..23c0de3b502e 100644
> > --- a/mm/damon.c
> > +++ b/mm/damon.c
[...]
> > +
> > +/*
> > + * Split a region into two small regions
> > + *
> > + * r the region to be split
> > + * sz_r size of the first sub-region that will be made
> > + */
> > +static void damon_split_region_at(struct damon_ctx *ctx,
> > + struct damon_region *r, unsigned long sz_r)
> > +{
> > + struct damon_region *new;
> > +
> > + new = damon_new_region(ctx, r->vm_start + sz_r, r->vm_end);
> > + r->vm_end = new->vm_start;
>
> We may well have a sampling address that is in the wrong region.
> It should have little effect on the stats as will fix on next sample
> but in my view still worth cleaning up.

Good catch! I will fix this in the next spin.
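One possible shape for the fix, sketched in userspace C with a hypothetical pared-down struct (the real fix might instead re-draw a random address inside the region rather than clamp):

```c
/* Hypothetical region with only the fields relevant here;
 * the real struct damon_region has more. */
struct region {
	unsigned long vm_start, vm_end;	/* [vm_start, vm_end) */
	unsigned long sampling_addr;	/* next address to sample */
};

/*
 * Split [vm_start, vm_end) at sz_r bytes into *r and *new_r, making
 * sure the old sampling address is not left pointing outside the
 * shrunken first region.  Clamping to vm_start is the simplest
 * repair; re-drawing a random address in-range would also work.
 */
static void split_region_at(struct region *r, struct region *new_r,
			    unsigned long sz_r)
{
	new_r->vm_start = r->vm_start + sz_r;
	new_r->vm_end = r->vm_end;
	new_r->sampling_addr = r->sampling_addr;
	r->vm_end = new_r->vm_start;

	if (r->sampling_addr >= r->vm_end)
		r->sampling_addr = r->vm_start;
	if (new_r->sampling_addr < new_r->vm_start)
		new_r->sampling_addr = new_r->vm_start;
}
```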

>
> > +
> > + damon_insert_region(new, r, damon_next_region(r));
> > +}
> > +
[...]
> > @@ -571,21 +689,29 @@ static int kdamond_fn(void *data)
> > struct damon_task *t;
> > struct damon_region *r, *next;
> > struct mm_struct *mm;
> > + unsigned int max_nr_accesses;
> >
> > pr_info("kdamond (%d) starts\n", ctx->kdamond->pid);
> > kdamond_init_regions(ctx);
> > while (!kdamond_need_stop(ctx)) {
> > + max_nr_accesses = 0;
> > damon_for_each_task(ctx, t) {
> > mm = damon_get_mm(t);
> > if (!mm)
> > continue;
> > - damon_for_each_region(r, t)
> > + damon_for_each_region(r, t) {
> > kdamond_check_access(ctx, mm, r);
> > + max_nr_accesses = max(r->nr_accesses,
> > + max_nr_accesses);
> > + }
> > mmput(mm);
> > }
> >
> > - if (kdamond_aggregate_interval_passed(ctx))
> > + if (kdamond_aggregate_interval_passed(ctx)) {
> > + kdamond_merge_regions(ctx, max_nr_accesses / 10);
> > kdamond_reset_aggregated(ctx);
> > + kdamond_split_regions(ctx);
> > + }
>
> I wonder if it would be simpler to split the sampling address setup and
> mkold from the access check. We would have to walk regions twice,
> but not have to bother separately dealing with updating some regions
> if they are modified in the above block.
>
> Also, the above has some overhead, so will bias that first sample each
> time the block above runs. If we do the mkold afterwards it will make
> much less difference.

Agreed, it will make the code much simpler and easier to read. However, I'm
not sure how much bias the overhead will actually introduce, because the
'aggregation interval' is usually much larger than the 'sampling interval'.
Anyway, I will change it so in the next spin!


Thanks,
SeongJae Park

[...]