[RFC] mm: memblock: change default cnt for regions from 1 to 0
From: Zubair Lutfullah Kakakhel
Date: Thu Oct 23 2014 - 12:58:54 EST
The default region counts in struct memblock are initialized to 1, with a
comment saying "empty dummy entry". If these really are dummy entries,
should the counts be 0 instead?
We ran into this in arch/mips/kernel/setup.c, in arch_mem_init(). CMA uses
memblock, but even with CMA disabled, the for_each_memblock(reserved, reg)
loop body is entered once, despite there being no reserved regions at all.
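For illustration, a reserved-region walk of this shape (a hypothetical
sketch, not the exact arch_mem_init() code) is enough to show the problem:

	struct memblock_region *reg;

	/* with the dummy entry, this body runs once even when
	 * nothing was ever reserved */
	for_each_memblock(reserved, reg)
		pr_info("reserved: base %pa size %pa\n",
			&reg->base, &reg->size);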
We traced it to the expansion of the for_each_memblock(memblock_type, region)
macro, which bounds the loop with the cnt field:
	for (region = memblock.memblock_type.regions;				\
	     region < (memblock.memblock_type.regions + memblock.memblock_type.cnt); \
	     region++)
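To make the corner case easy to see outside the kernel, here is a minimal
standalone C mock-up (the names mirror memblock, but nothing here is kernel
code):

	#include <stdio.h>

	struct region {
		unsigned long base, size;
	};

	static struct {
		struct region regions[4];
		int cnt;
	} reserved = {
		.cnt = 1,	/* the "empty dummy entry" default */
	};

	int main(void)
	{
		struct region *reg;

		/* same shape as the for_each_memblock() expansion above */
		for (reg = reserved.regions;
		     reg < reserved.regions + reserved.cnt;
		     reg++)
			printf("visited base=%lu size=%lu\n",
			       reg->base, reg->size);
		return 0;
	}

With cnt = 1 this prints one line for a region that was never added; with
cnt = 0 it prints nothing.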
In the corner case where no regions have been added, the default cnt value
of 1 means the loop body still runs once, operating on the empty dummy
entry. Is this by design, or unintentional? If it is unintentional, then
every for_each_memblock() loop in the tree is presumably running one extra
iteration.
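If the dummy entry is by design, one caller-side workaround is to skip
zero-size regions explicitly (a sketch of the guard, not a claim about what
existing callers do):

	struct memblock_region *reg;

	for_each_memblock(reserved, reg) {
		if (reg->size == 0)
			continue;	/* skip the empty dummy entry */
		/* ... handle a real reserved region ... */
	}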
---
mm/memblock.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/memblock.c b/mm/memblock.c
index 6d2f219..b91301c 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -33,16 +33,16 @@ static struct memblock_region memblock_physmem_init_regions[INIT_PHYSMEM_REGIONS
 struct memblock memblock __initdata_memblock = {
 	.memory.regions		= memblock_memory_init_regions,
-	.memory.cnt		= 1,	/* empty dummy entry */
+	.memory.cnt		= 0,	/* empty dummy entry */
 	.memory.max		= INIT_MEMBLOCK_REGIONS,
 
 	.reserved.regions	= memblock_reserved_init_regions,
-	.reserved.cnt		= 1,	/* empty dummy entry */
+	.reserved.cnt		= 0,	/* empty dummy entry */
 	.reserved.max		= INIT_MEMBLOCK_REGIONS,
 
 #ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
 	.physmem.regions	= memblock_physmem_init_regions,
-	.physmem.cnt		= 1,	/* empty dummy entry */
+	.physmem.cnt		= 0,	/* empty dummy entry */
 	.physmem.max		= INIT_PHYSMEM_REGIONS,
 #endif
--
1.9.1