[PATCHv2 2/2] zsmalloc: simplify read begin/end logic

From: Sergey Senozhatsky

Date: Wed Jan 07 2026 - 00:24:11 EST


From: Yosry Ahmed <yosry.ahmed@xxxxxxxxx>

When we switched from using class->size (for span detection)
to the actual compressed object size, we had to compensate for
the fact that class->size implicitly took the inlined handle
into account. Instead of adjusting the size of the compressed
object (adding the handle offset for non-huge size classes),
we can move some lines around and simplify the code:
read_begin/end already have paths that compensate for the
inlined object handle offset.

Signed-off-by: Yosry Ahmed <yosry.ahmed@xxxxxxxxx>
---
mm/zsmalloc.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 119c196a287a..cc3d9501ae21 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1088,7 +1088,7 @@ void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
off = offset_in_page(class->size * obj_idx);

if (!ZsHugePage(zspage))
- mem_len += ZS_HANDLE_SIZE;
+ off += ZS_HANDLE_SIZE;

if (off + mem_len <= PAGE_SIZE) {
/* this object is contained entirely within a page */
@@ -1110,9 +1110,6 @@ void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
0, sizes[1]);
}

- if (!ZsHugePage(zspage))
- addr += ZS_HANDLE_SIZE;
-
return addr;
}
EXPORT_SYMBOL_GPL(zs_obj_read_begin);
@@ -1133,11 +1130,9 @@ void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
off = offset_in_page(class->size * obj_idx);

if (!ZsHugePage(zspage))
- mem_len += ZS_HANDLE_SIZE;
+ off += ZS_HANDLE_SIZE;

if (off + mem_len <= PAGE_SIZE) {
- if (!ZsHugePage(zspage))
- off += ZS_HANDLE_SIZE;
handle_mem -= off;
kunmap_local(handle_mem);
}
--
2.52.0.351.gbe84eed79e-goog