Re: [RFC] mm: activate access-more-than-once page via NUMA balancing
From: Mel Gorman
Date: Thu Mar 25 2021 - 07:58:24 EST
On Thu, Mar 25, 2021 at 12:33:45PM +0800, Huang, Ying wrote:
> > I caution against this patch.
> >
> > It's non-deterministic for a number of reasons. Because it requires
> > NUMA balancing to be enabled, the pageout behaviour of a system
> > depends on whether NUMA balancing is active.
> > inappropriately preserved, NUMA balancing could be disabled for the
> > wrong reasons. It only applies to pages that have no target node, so
> > memory policies affect which pages are activated differently. Similarly,
> > NUMA balancing does not scan all VMAs and some pages may never trap a
> > NUMA fault as a result. The timing of when an address space gets scanned
> > is driven by the locality of pages and so the timing of page activation
> > potentially becomes linked to whether pages are local or need to migrate
> > (although not right now for this patch as it only affects pages with a
> > target nid of NUMA_NO_NODE). In other words, changes in NUMA balancing
> > that affect migration potentially affect the aging rate. Similarly,
> > a process with a single thread and a process with multiple threads
> > potentially have different activation rates.
> >
> > Finally, the NUMA balancing scan algorithm is sub-optimal. It
> > potentially walks the entire address space even though only a small
> > number of pages end up being scanned. This is particularly problematic
> > when a process has a lot of threads, because the threads redundantly
> > scan the same regions. If
> > NUMA balancing ever introduced range tracking of faulted pages to limit
> > how much scanning it has to do, it would inadvertently cause a change in
> > page activation rate.
> >
> > NUMA balancing is about page locality, it should not get conflated with
> > page aging.
>
> I understand your concerns about tying NUMA balancing to page
> reclaim. The requirements of page locality and page aging are
> different, so the policies need to be different. That is the wrong
> part of the patch.
>
> From another point of view, it's still possible to share some underlying
> mechanisms (and code) between them: that is, scanning the page tables
> to make pages inaccessible and capturing the page accesses via page
> faults.
Potentially yes, but not necessarily recommended for page aging. NUMA
balancing has to be careful about the rate at which it scans pages to
avoid excessive overhead, so the rate is driven by locality. The
scanning happens within a task's context, so during that time the task
is not executing its normal work, and it also incurs the overhead of
the hinting faults. Generally, this is not too much overhead because
once pages have been migrated locally, the scan rate drops and so does
the overhead.
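
To be concrete about the mechanism under discussion, its userspace
analogue is mprotect(PROT_NONE) plus a SIGSEGV handler standing in for
change_prot_numa() and the hinting fault. This is a demo only, with
error checking omitted:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static size_t page_size;

/* Stand-in for the NUMA hinting fault handler. */
static void hint_fault(int sig, siginfo_t *si, void *uc)
{
	/* Round the faulting address down to its page. */
	char *page = (char *)((unsigned long)si->si_addr & ~(page_size - 1));

	(void)sig;
	(void)uc;
	/* The kernel would make its locality decision here. We just
	 * restore access so the faulting instruction can restart. */
	mprotect(page, page_size, PROT_READ | PROT_WRITE);
}

int main(void)
{
	struct sigaction sa;
	char *region;

	page_size = (size_t)sysconf(_SC_PAGESIZE);

	memset(&sa, 0, sizeof(sa));
	sa.sa_flags = SA_SIGINFO;
	sa.sa_sigaction = hint_fault;
	sigaction(SIGSEGV, &sa, NULL);

	region = mmap(NULL, 4 * page_size, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	memset(region, 0, 4 * page_size);

	/* The "scan": revoke access so the next touch traps. */
	mprotect(region, 4 * page_size, PROT_NONE);

	/* Touch one page; only that access is captured. */
	region[page_size] = 1;
	printf("page 1 trapped a fault; the other pages stayed cold\n");
	return 0;
}

Every trapped access is a trip through the fault path, which is the
per-fault overhead mentioned above.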
However, page aging is a constant concern rather than a locality-driven
one, so the scan rate could not easily be adapted in a way that remains
deterministic.
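
To illustrate why the rate adapts naturally for locality but would not
for aging, here is a toy model of such a controller. It is illustrative
only; the constants and the 70% threshold are invented, and the real
logic in task_numa_work() and update_task_scan_period() in
kernel/sched/fair.c is considerably more involved:

#include <stdio.h>

/* Invented bounds, loosely mirroring the scan period sysctls. */
#define SCAN_PERIOD_MIN_MS	1000
#define SCAN_PERIOD_MAX_MS	60000

/*
 * Grow the scan period (scan less often) when most trapped hinting
 * faults were local; shrink it when most were remote.
 */
static unsigned int adapt_scan_period(unsigned int period_ms,
				      unsigned long local,
				      unsigned long remote)
{
	unsigned long total = local + remote;

	if (!total)
		return period_ms;		/* nothing sampled */

	if (local * 10 >= total * 7)		/* >= 70% local */
		period_ms *= 2;
	else					/* mostly remote */
		period_ms /= 2;

	if (period_ms < SCAN_PERIOD_MIN_MS)
		period_ms = SCAN_PERIOD_MIN_MS;
	if (period_ms > SCAN_PERIOD_MAX_MS)
		period_ms = SCAN_PERIOD_MAX_MS;
	return period_ms;
}

int main(void)
{
	unsigned int period = SCAN_PERIOD_MIN_MS;
	int i;

	/* A converged workload: faults are mostly local, so the scan
	 * period backs off and the overhead fades away with it. */
	for (i = 0; i < 8; i++) {
		period = adapt_scan_period(period, 90, 10);
		printf("window %d: next scan in %u ms\n", i, period);
	}
	return 0;
}

The controller's input is locality and that input dies away once pages
have migrated. Aging pressure never converges like that, so the
scanning cost would never back off.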
> Right now this page access information is used for page locality. Do
> you think it's a good idea to use this information for page aging too
> (but with a different policy, as you pointed out)?
>
I'm not completely opposed to it, but I think the overhead it would
introduce could be severe. Worse, if a workload fits in memory and there
is little to no memory pressure, it's all overhead for no gain. Early
generations of NUMA balancing had to strike a balance to ensure the
gains from locality exceeded the cost of measuring locality, and doing
the same for page aging is in some ways even more challenging.
> From yet another point of view :-), the current NUMA balancing
> implementation assumes that a node's private pages can fit in the
> accessing node. But this may not always be true. Is it a valid
> optimization to migrate the hot private pages first?
>
I'm not sure how the hotness of pages could be ranked. At the time of a
hinting fault, the page is by definition active because it has just been
accessed. Prioritising which pages to migrate based on the number of
faults that have been trapped would require storing that count somewhere.
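
The ranking itself would be trivial once such a count existed; the
problem is the storage, as struct page is far too space-constrained to
grow a field for it. Purely as an illustration (nothing in this sketch
exists in the kernel):

#include <stdio.h>
#include <stdlib.h>

/*
 * Hypothetical bookkeeping: one counter of trapped hinting faults per
 * page. Where such a counter would live is the open question.
 */
struct page_hotness {
	unsigned long pfn;	/* page frame number */
	unsigned int nr_faults;	/* hinting faults trapped so far */
};

/* Sort the most-faulted ("hottest") pages first. */
static int cmp_hotness(const void *a, const void *b)
{
	const struct page_hotness *pa = a, *pb = b;

	if (pa->nr_faults != pb->nr_faults)
		return pa->nr_faults < pb->nr_faults ? 1 : -1;
	return 0;
}

int main(void)
{
	struct page_hotness pages[] = {
		{ 0x1000, 1 }, { 0x2000, 5 }, { 0x3000, 3 },
	};
	size_t i, n = sizeof(pages) / sizeof(pages[0]);

	/* Migrate the hottest private pages first. */
	qsort(pages, n, sizeof(pages[0]), cmp_hotness);
	for (i = 0; i < n; i++)
		printf("pfn %#lx: %u faults\n",
		       pages[i].pfn, pages[i].nr_faults);
	return 0;
}

Note also that a page traps at most one hinting fault per scan window
because access is restored when the fault is handled, so a counter like
this would only accumulate across scan windows.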
--
Mel Gorman
SUSE Labs