Re: [PATCH] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name()

From: Donet Tom

Date: Tue Apr 14 2026 - 06:20:21 EST


Hi,

On 4/14/26 3:14 PM, DaeMyung Kang wrote:
free_reserved_area() treats its 'end' argument as exclusive: it aligns
end down via 'end & PAGE_MASK' and iterates with 'pos < end'.

reserve_mem_release_by_name() instead passes 'start + map->size - 1',
which causes the last page of a page-aligned reservation to never be
freed. For a reservation spanning N pages, only N - 1 pages are
released back to the allocator.

Fix it by passing the exclusive end address, 'start + map->size'.

Signed-off-by: DaeMyung Kang <charsyam@xxxxxxxxx>


Do we need a Fixes: tag?

-Donet

---
mm/memblock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index b3ddfdec7a80..d4a02f1750e9 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2434,7 +2434,7 @@ int reserve_mem_release_by_name(const char *name)
 		return 0;
 
 	start = phys_to_virt(map->start);
-	end = start + map->size - 1;
+	end = start + map->size;
 	snprintf(buf, sizeof(buf), "reserve_mem:%s", name);
 	free_reserved_area(start, end, 0, buf);
 	map->size = 0;