Re: [RFC/RFT PATCH 1/6] numa: Move numa implementation to common code
From: Atish Patra
Date: Wed Aug 19 2020 - 15:18:57 EST
On Tue, Aug 18, 2020 at 8:19 PM Anshuman Khandual
<anshuman.khandual@xxxxxxx> wrote:
>
>
>
> On 08/15/2020 03:17 AM, Atish Patra wrote:
> > The ARM64 NUMA implementation is generic enough that RISC-V can reuse it
> > with very minor cosmetic changes. This will help both ARM64 and RISC-V in
> > terms of maintenance and feature improvements.
> >
> > Move the NUMA implementation code to a common directory so that both ISAs
> > can reuse it. This doesn't introduce any functional changes for ARM64.
> >
> > Signed-off-by: Atish Patra <atish.patra@xxxxxxx>
> > ---
> > arch/arm64/Kconfig | 1 +
> > arch/arm64/include/asm/numa.h | 45 +---------------
> > arch/arm64/mm/Makefile | 1 -
> > drivers/base/Kconfig | 6 +++
> > drivers/base/Makefile | 1 +
> > .../mm/numa.c => drivers/base/arch_numa.c | 0
> > include/asm-generic/numa.h | 51 +++++++++++++++++++
> > 7 files changed, 60 insertions(+), 45 deletions(-)
> > rename arch/arm64/mm/numa.c => drivers/base/arch_numa.c (100%)
> > create mode 100644 include/asm-generic/numa.h
> >
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index 6d232837cbee..955a0cf75b16 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -960,6 +960,7 @@ config HOTPLUG_CPU
> > # Common NUMA Features
> > config NUMA
> > bool "NUMA Memory Allocation and Scheduler Support"
> > + select GENERIC_ARCH_NUMA
>
> So this introduces a generic NUMA framework selectable with GENERIC_ARCH_NUMA.
>
> > select ACPI_NUMA if ACPI
> > select OF_NUMA
> > help
> > diff --git a/arch/arm64/include/asm/numa.h b/arch/arm64/include/asm/numa.h
> > index 626ad01e83bf..8c8cf4297cc3 100644
> > --- a/arch/arm64/include/asm/numa.h
> > +++ b/arch/arm64/include/asm/numa.h
> > @@ -3,49 +3,6 @@
> > #define __ASM_NUMA_H
> >
> > #include <asm/topology.h>
> > -
> > -#ifdef CONFIG_NUMA
> > -
> > -#define NR_NODE_MEMBLKS (MAX_NUMNODES * 2)
> > -
> > -int __node_distance(int from, int to);
> > -#define node_distance(a, b) __node_distance(a, b)
> > -
> > -extern nodemask_t numa_nodes_parsed __initdata;
> > -
> > -extern bool numa_off;
> > -
> > -/* Mappings between node number and cpus on that node. */
> > -extern cpumask_var_t node_to_cpumask_map[MAX_NUMNODES];
> > -void numa_clear_node(unsigned int cpu);
> > -
> > -#ifdef CONFIG_DEBUG_PER_CPU_MAPS
> > -const struct cpumask *cpumask_of_node(int node);
> > -#else
> > -/* Returns a pointer to the cpumask of CPUs on Node 'node'. */
> > -static inline const struct cpumask *cpumask_of_node(int node)
> > -{
> > - return node_to_cpumask_map[node];
> > -}
> > -#endif
> > -
> > -void __init arm64_numa_init(void);
> > -int __init numa_add_memblk(int nodeid, u64 start, u64 end);
> > -void __init numa_set_distance(int from, int to, int distance);
> > -void __init numa_free_distance(void);
> > -void __init early_map_cpu_to_node(unsigned int cpu, int nid);
> > -void numa_store_cpu_info(unsigned int cpu);
> > -void numa_add_cpu(unsigned int cpu);
> > -void numa_remove_cpu(unsigned int cpu);
> > -
> > -#else /* CONFIG_NUMA */
> > -
> > -static inline void numa_store_cpu_info(unsigned int cpu) { }
> > -static inline void numa_add_cpu(unsigned int cpu) { }
> > -static inline void numa_remove_cpu(unsigned int cpu) { }
> > -static inline void arm64_numa_init(void) { }
> > -static inline void early_map_cpu_to_node(unsigned int cpu, int nid) { }
> > -
> > -#endif /* CONFIG_NUMA */
> > +#include <asm-generic/numa.h>
> >
> > #endif /* __ASM_NUMA_H */
> > diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
> > index d91030f0ffee..928c308b044b 100644
> > --- a/arch/arm64/mm/Makefile
> > +++ b/arch/arm64/mm/Makefile
> > @@ -6,7 +6,6 @@ obj-y := dma-mapping.o extable.o fault.o init.o \
> > obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
> > obj-$(CONFIG_PTDUMP_CORE) += dump.o
> > obj-$(CONFIG_PTDUMP_DEBUGFS) += ptdump_debugfs.o
> > -obj-$(CONFIG_NUMA) += numa.o
> > obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
> > KASAN_SANITIZE_physaddr.o += n
> >
> > diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
> > index 8d7001712062..73c2151de194 100644
> > --- a/drivers/base/Kconfig
> > +++ b/drivers/base/Kconfig
> > @@ -210,4 +210,10 @@ config GENERIC_ARCH_TOPOLOGY
> > appropriate scaling, sysfs interface for reading capacity values at
> > runtime.
> >
> > +config GENERIC_ARCH_NUMA
> > + bool
> > + help
> > + Enable support for a generic NUMA implementation. Currently, RISC-V
> > + and ARM64 use it.
> > +
> > endmenu
> > diff --git a/drivers/base/Makefile b/drivers/base/Makefile
> > index 157452080f3d..c3d02c644222 100644
> > --- a/drivers/base/Makefile
> > +++ b/drivers/base/Makefile
> > @@ -23,6 +23,7 @@ obj-$(CONFIG_PINCTRL) += pinctrl.o
> > obj-$(CONFIG_DEV_COREDUMP) += devcoredump.o
> > obj-$(CONFIG_GENERIC_MSI_IRQ_DOMAIN) += platform-msi.o
> > obj-$(CONFIG_GENERIC_ARCH_TOPOLOGY) += arch_topology.o
> > +obj-$(CONFIG_GENERIC_ARCH_NUMA) += arch_numa.o
> >
> > obj-y += test/
> >
> > diff --git a/arch/arm64/mm/numa.c b/drivers/base/arch_numa.c
> > similarity index 100%
> > rename from arch/arm64/mm/numa.c
> > rename to drivers/base/arch_numa.c
>
> drivers/base/ does not seem right place to host generic NUMA code.
I chose drivers/base because the common topology code is also present there
(drivers/base/arch_topology.c under GENERIC_ARCH_TOPOLOGY). The idea is to
keep all arch-common code (at least between RISC-V & ARM64) in one place.
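
For reference, the RISC-V side of the series ends up mirroring the arm64
hunks in this patch. A rough sketch (file names and the header guard below
are illustrative; the real hunks are in the last patch of the series):

  # arch/riscv/Kconfig (sketch)
  config NUMA
          bool "NUMA Memory Allocation and Scheduler Support"
          select GENERIC_ARCH_NUMA
          select OF_NUMA

  /* arch/riscv/include/asm/numa.h (sketch) */
  #ifndef __ASM_NUMA_H
  #define __ASM_NUMA_H

  #include <asm/topology.h>
  #include <asm-generic/numa.h>

  #endif /* __ASM_NUMA_H */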
> Probably it should be either mm/ or kernel/.
I am fine with mm/arch_numa.c as well if that is preferred over drivers/base.
> The other question here would be if the existing arm64 NUMA implementation
> is sufficient for generic NUMA. I would expect any platform selecting this
> config should get some NUMA support enabled; will that be true with the
> present code?
It is for RISC-V. Here is the RISC-V support patch (the last patch in the series):
http://lists.infradead.org/pipermail/linux-riscv/2020-August/001659.html
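
To expand on that: the generic code is driven by the firmware description
(device tree via OF_NUMA, or ACPI), so a platform selecting GENERIC_ARCH_NUMA
gets working NUMA once the arch wires up the hooks declared in the header
removed above. A rough sketch of the call sites a new arch needs, using the
names from that header (the arm64_ prefix on the init function obviously
needs an arch-neutral spelling for this to be truly generic):

  /* early boot, once memblock is set up: parse the DT/ACPI memory
   * affinity and distance information and register the nodes
   */
  arm64_numa_init();

  /* while parsing the CPU topology from DT/ACPI */
  early_map_cpu_to_node(cpu, nid);

  /* from the SMP bring-up / hotplug paths */
  numa_store_cpu_info(cpu);
  numa_add_cpu(cpu);
  numa_remove_cpu(cpu);     /* on hot-unplug */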
> Otherwise it will be difficult to name it as GENERIC_ARCH_NUMA.
>
--
Regards,
Atish