[PATCH v2] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name()

From: DaeMyung Kang

Date: Tue Apr 14 2026 - 06:49:35 EST


free_reserved_area() treats its 'end' argument as exclusive: it aligns
end down via 'end & PAGE_MASK' and iterates with 'pos < end'.

reserve_mem_release_by_name() instead passes the inclusive end address,
'start + map->size - 1'. Aligning that address down with 'end & PAGE_MASK'
lands on the start of the reservation's last page, so the 'pos < end'
loop never reaches it: for a reservation spanning N pages, only N - 1
pages are released back to the allocator.

Fix it by passing the exclusive end address, 'start + map->size'.

Fixes: 74e2498ccf7b ("mm/memblock: Add reserved memory release function")
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: DaeMyung Kang <charsyam@xxxxxxxxx>
---
Changes in v2:
- Add Fixes: tag and Cc: stable (per Donet Tom's review).
- v1: https://lore.kernel.org/lkml/

mm/memblock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index b3ddfdec7a80..d4a02f1750e9 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2434,7 +2434,7 @@ int reserve_mem_release_by_name(const char *name)
return 0;

start = phys_to_virt(map->start);
- end = start + map->size - 1;
+ end = start + map->size;
snprintf(buf, sizeof(buf), "reserve_mem:%s", name);
free_reserved_area(start, end, 0, buf);
map->size = 0;
--
2.43.0