On Tue, 4 Nov 2014, Daniel J Blueman wrote:
> On 11/04/2014 03:38 AM, Thomas Gleixner wrote:
> > On Sun, 2 Nov 2014, Daniel J Blueman wrote:
> > > On larger x86-64 systems, use a 2GB memory block size to reduce sysfs
> > > entry creation time by 16x. Large is defined as 64GB or more memory.
> >
> > This changelog sucks.
> >
> > It neither tells which sysfs entries are meant nor does it explain
> > what the actual effect of this change is aside of speeding up some
> > random sysfs thingy.
>
> How about this?
>
> On large-memory systems of 64GB or more with memory hot-plug enabled, use a
> 2GB memory block size. E.g. with 64GB of memory, this reduces the number of
> directories in /sys/devices/system/memory from 512 to 32, making it more
> manageable and reducing the creation time accordingly.
It still does not tell what the downside is of this and why you think
it does not matter.
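
For reference, the 512 vs. 32 figures in the proposed changelog are just the
installed memory divided by the block size (64GB / 128MB = 512, 64GB / 2GB =
32), one memoryNN directory per block. A throwaway userspace sketch of that
arithmetic, nothing kernel-specific and all names made up for illustration:

#include <stdio.h>

int main(void)
{
	unsigned long long mem = 64ULL << 30;		/* 64GB installed */
	unsigned long long blk_128m = 128ULL << 20;	/* default x86-64 block size */
	unsigned long long blk_2g = 2ULL << 30;		/* proposed block size */

	/* one /sys/devices/system/memory/memoryNN directory per block */
	printf("128MB blocks: %llu dirs\n", mem / blk_128m);	/* 512 */
	printf("2GB blocks:   %llu dirs\n", mem / blk_2g);	/* 32 */
	return 0;
}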
> > > @@ -1247,9 +1246,9 @@ static unsigned long probe_memory_block_size(void)
> > >  	/* start from 2g */
> > >  	unsigned long bz = 1UL<<31;
> > >
> > > -#ifdef CONFIG_X86_UV
> > > -	if (is_uv_system()) {
> > > -		printk(KERN_INFO "UV: memory block size 2GB\n");
> > > +#ifdef CONFIG_X86_64
> >
> > And this brainless 's/CONFIG_X86_UV/CONFIG_X86_64/' sucks even
> > more. I'm sure you can figure out the WHY yourself.
>
> The benefit of this is applicable to other architectures. I'm unable to test
> the change, but if you agree it's conservative enough, I'll drop the ifdef?
Which other architectures? Care to turn on your brain before replying?
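
For illustration only, the more conservative shape the thread is circling
around would key the block size off the amount of installed memory rather
than off a Kconfig symbol. A hypothetical sketch, not the code in the patch:
MIN_MEMORY_BLOCK_SIZE is the 128MB x86-64 section size, the 64GB cutoff is
the one from the changelog, and choose_memory_block_size() is a made-up name.

#define MIN_MEMORY_BLOCK_SIZE	(1UL << 27)	/* 128MB, the x86-64 section size */
#define LARGE_MEM_THRESHOLD	(64UL << 30)	/* 64GB cutoff from the changelog */

static unsigned long choose_memory_block_size(unsigned long total_mem)
{
	/* big machines: bigger blocks, so far fewer sysfs directories */
	if (total_mem >= LARGE_MEM_THRESHOLD)
		return 2UL << 30;		/* 2GB blocks */

	/* everything else keeps the historical 128MB block size */
	return MIN_MEMORY_BLOCK_SIZE;
}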