Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> writes:
> Hi,
>
> Currently, on our ARM servers with NUMA enabled, we have found that the
> cross-die latency is noticeably higher and significantly impacts the
> workload's performance, so on ARM servers we rely on NUMA balancing to
> avoid cross-die accesses. I previously posted a patchset[1] to support
> speculative NUMA faults, which improves NUMA balancing's performance by
> exploiting data locality. Moreover, Huang Ying's patchset[2] introduced
> batch migration as a way to reduce the cost of TLB flushes, which will
> also benefit the migration of multiple pages at once during NUMA
> balancing.
>
> Building on that, we plan to support batch migration in do_numa_page()
> to improve NUMA balancing's performance. But before adding a complicated
> batch migration algorithm for NUMA balancing, some cleanup and
> preparation work needs to be done first, which is what this patch set
> does. In short, this patchset extends the migrate_misplaced_page()
> interface to support batch migration, with no functional changes
> intended.

Will these cleanups benefit anything other than batch migration? If not,
I suggest posting the whole series; that way, people will see more
clearly why these cleanups are needed.
--
Best Regards,
Huang, Ying

> [1] https://lore.kernel.org/lkml/cover.1639306956.git.baolin.wang@xxxxxxxxxxxxxxxxx/t/#mc45929849b5d0e29b5fdd9d50425f8e95b8f2563
> [2] https://lore.kernel.org/all/20230213123444.155149-1-ying.huang@xxxxxxxxx/T/#u
>
> Baolin Wang (4):
>   mm: migrate: move migration validation into numa_migrate_prep()
>   mm: migrate: move the numamigrate_isolate_page() into do_numa_page()
>   mm: migrate: change migrate_misplaced_page() to support multiple pages
>     migration
>   mm: migrate: change to return the number of pages migrated
>     successfully
>
>  include/linux/migrate.h | 15 ++++++++---
>  mm/huge_memory.c        | 19 +++++++++++---
>  mm/memory.c             | 34 +++++++++++++++++++++++-
>  mm/migrate.c            | 58 ++++++++---------------------------------
>  4 files changed, 71 insertions(+), 55 deletions(-)
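
For context, here is a minimal, self-contained C sketch of the interface
shape the cover letter describes: the single-page migrate_misplaced_page()
generalized to take a batch of pages (patch 3) and to return the number of
pages migrated successfully (patch 4). All types and names below are
illustrative stand-ins for the kernel internals, not the actual patch
contents.

#include <stdio.h>
#include <stdbool.h>

/* Toy stand-in for struct page; the real kernel type is opaque here. */
struct page {
	int nid;	/* node the page currently resides on */
	bool isolated;	/* isolated from the LRU, ready to migrate */
};

/* Pretend to migrate one page to @target_nid; may fail. */
static bool migrate_one(struct page *page, int target_nid)
{
	if (!page->isolated)
		return false;	/* only isolated pages can be migrated */
	page->nid = target_nid;
	return true;
}

/*
 * Batched interface shape described by the cover letter: take an array
 * of pages instead of a single page, and return how many pages migrated
 * successfully rather than a simple success/failure flag.
 */
static int migrate_misplaced_pages(struct page **pages, int nr_pages,
				   int target_nid)
{
	int nr_succeeded = 0;

	for (int i = 0; i < nr_pages; i++)
		if (migrate_one(pages[i], target_nid))
			nr_succeeded++;

	return nr_succeeded;
}

int main(void)
{
	struct page a = { .nid = 0, .isolated = true };
	struct page b = { .nid = 0, .isolated = false };
	struct page *batch[] = { &a, &b };

	int done = migrate_misplaced_pages(batch, 2, 1);
	printf("migrated %d of 2 pages\n", done);	/* prints 1 of 2 */
	return 0;
}

Returning a count rather than a boolean lets the caller account for
partial success within a batch, which is presumably why the last patch
changes the return convention before any batching logic is added.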