[patch] x86: increase CONFIG_NODES_SHIFT max to 10

From: David Rientjes
Date: Wed Mar 10 2010 - 18:42:32 EST


Some larger systems require more than 512 nodes, so increase the maximum
CONFIG_NODES_SHIFT to 10 for a new max of 1024 nodes.
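
For reference, the node-count ceiling follows directly from this value:
include/linux/numa.h derives MAX_NUMNODES from CONFIG_NODES_SHIFT, so a shift
of 10 gives 1 << 10 = 1024 possible nodes. Roughly (quoted from memory, see
the header for the exact text):

#ifdef CONFIG_NODES_SHIFT
#define NODES_SHIFT	CONFIG_NODES_SHIFT
#else
#define NODES_SHIFT	0
#endif

#define MAX_NUMNODES	(1 << NODES_SHIFT)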

This was tested with numa=fake=64M on systems with more than 64GB of RAM.
A total of 1022 nodes were initialized.
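
For anyone who wants to repeat the check, the initialized nodes show up as
node<N> directories under /sys/devices/system/node (numactl --hardware reports
the same information). A minimal userspace counter, purely illustrative and
not part of this patch:

#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	DIR *d = opendir("/sys/devices/system/node");
	struct dirent *e;
	int nodes = 0;

	if (!d) {
		perror("opendir");
		return 1;
	}
	/* Count entries named node0, node1, ..., skipping files such as
	 * "possible" and "online" that live in the same directory. */
	while ((e = readdir(d)) != NULL)
		if (!strncmp(e->d_name, "node", 4) &&
		    e->d_name[4] >= '0' && e->d_name[4] <= '9')
			nodes++;
	closedir(d);
	printf("%d NUMA nodes registered\n", nodes);
	return 0;
}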

This builds successfully with no additional warnings on x86_64 allyesconfig.

Signed-off-by: David Rientjes <rientjes@xxxxxxxxxx>
---
Greg KH has queued up numa-fix-BUILD_BUG_ON-for-node_read_distance.patch
for 2.6.35 to fix the build error when CONFIG_NODES_SHIFT is set to 10.
See http://lkml.org/lkml/2010/3/10/390
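
For context on why NODES_SHIFT=10 trips that check: node_read_distance()
formats the distance table into a single sysfs buffer at roughly 4 characters
per node, so 1024 nodes need about 4096 bytes, which exceeds the bound the old
BUILD_BUG_ON enforced (half a page, if memory serves). A back-of-the-envelope
sketch; the 4-chars-per-node figure and 4K page size are the assumptions here:

#include <stdio.h>

/* Illustrative numbers only: NODES_SHIFT as raised by this patch, 4 KB base
 * pages on x86, and ~4 characters per node for each distance value. */
#define NODES_SHIFT	10
#define MAX_NUMNODES	(1 << NODES_SHIFT)
#define X86_PAGE_SIZE	4096
#define CHARS_PER_NODE	4

int main(void)
{
	int needed = MAX_NUMNODES * CHARS_PER_NODE;

	printf("distance line for %d nodes: %d bytes (half a page = %d)\n",
	       MAX_NUMNODES, needed, X86_PAGE_SIZE / 2);
	return 0;
}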

arch/x86/Kconfig | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1213,8 +1213,8 @@ config NUMA_EMU
 
 config NODES_SHIFT
 	int "Maximum NUMA Nodes (as a power of 2)" if !MAXSMP
-	range 1 9
-	default "9" if MAXSMP
+	range 1 10
+	default "10" if MAXSMP
 	default "6" if X86_64
 	default "4" if X86_NUMAQ
 	default "3"
--