Re: [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to ensure proper metadata align

From: Vlastimil Babka

Date: Wed Jan 07 2026 - 06:43:20 EST


On 1/5/26 09:02, Harry Yoo wrote:
> When both KASAN and SLAB_STORE_USER are enabled, accesses to
> struct kasan_alloc_meta fields can be misaligned on 64-bit architectures.
> This occurs because orig_size is currently defined as unsigned int,
> which only guarantees 4-byte alignment. When struct kasan_alloc_meta is
> placed after orig_size, it may end up at a 4-byte boundary rather than
> the required 8-byte boundary on 64-bit systems.

Oops.

> Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
> are assumed to require 64-bit accesses to be 64-bit aligned.
> See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert:
> "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.
>
> Change orig_size from unsigned int to unsigned long to ensure proper
> alignment for any subsequent metadata. This should not waste additional
> memory because kmalloc objects are already aligned to at least
> ARCH_KMALLOC_MINALIGN.

I'll add:

Closes: https://lore.kernel.org/all/aPrLF0OUK651M4dk@hyeyoo/

since that's useful context and discussion.

> Suggested-by: Andrey Ryabinin <ryabinin.a.a@xxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc")
> Signed-off-by: Harry Yoo <harry.yoo@xxxxxxxxxx>

As the problem was introduced in 6.1, it doesn't seem urgent to push as a 6.19
rc fix, so keeping it as part of the series (where it's a necessary
prerequisite per the Closes: link above) and backporting to stable later
seems indeed sufficient. Thanks.