Re: [PATCH v2] slab: support for compiler-assisted type-based slab cache partitioning
From: Marco Elver
Date: Thu Apr 16 2026 - 09:43:42 EST
On Wed, 15 Apr 2026 at 16:37, Marco Elver <elver@xxxxxxxxxx> wrote:
>
> Rework the general infrastructure around RANDOM_KMALLOC_CACHES into more
> flexible PARTITION_KMALLOC_CACHES, with the former being a partitioning
> mode of the latter.
>
> Introduce a new mode, TYPED_KMALLOC_CACHES, which leverages a feature
> available in Clang 22 and later, called "allocation tokens" via
> __builtin_infer_alloc_token [1]. Unlike RANDOM_KMALLOC_CACHES, this mode
> deterministically assigns a slab cache to an allocation of type T,
> regardless of allocation site.
>
> The builtin __builtin_infer_alloc_token(<malloc-args>, ...) instructs
> the compiler to infer an allocation type from arguments commonly passed
> to memory-allocating functions and returns a type-derived token ID. The
> implementation passes kmalloc-args to the builtin: the compiler performs
> best-effort type inference, and then recognizes common patterns such as
> `kmalloc(sizeof(T), ...)`, `kmalloc(sizeof(T) * n, ...)`, but also
> `(T *)kmalloc(...)`. Where the compiler fails to infer a type, the
> fallback token (default: 0) is chosen.
>
> Note: the kmalloc_obj(..) APIs fix the pattern in which size and result
> type are expressed, and therefore ensure there is little drift in the
> patterns the compiler needs to recognize. Specifically, kmalloc_obj()
> and friends expand to `(TYPE *)KMALLOC(__obj_size, GFP)`, which the
> compiler recognizes via the cast to TYPE*.
>
> Clang's default token ID calculation is described as [1]:
>
> typehashpointersplit: This mode assigns a token ID based on the hash
> of the allocated type's name, where the top half ID-space is reserved
> for types that contain pointers and the bottom half for types that do
> not contain pointers.
>
> Separating pointer-containing objects from pointerless objects and data
> allocations can help mitigate certain classes of memory corruption
> exploits [2]: an attacker who gains a buffer overflow on a primitive
> buffer cannot use it to directly corrupt pointers or other critical
> metadata in an object residing in a different, isolated heap region.
>
> It is important to note that heap isolation strategies are best-effort
> and do not provide an absolute security guarantee, but they come at
> relatively low performance cost. Note that this also does not prevent
> cross-cache attacks: while waiting for future features like
> SLAB_VIRTUAL [3] to provide physical page isolation, this feature
> should be deployed alongside SHUFFLE_PAGE_ALLOCATOR and init_on_free=1
> to mitigate cross-cache and page-reuse attacks as much as possible
> today.
>
> With all that, my kernel (x86 defconfig) shows me a histogram of slab
> cache object distribution per /proc/slabinfo (after boot):
>
> <slab cache> <objs> <hist>
> kmalloc-part-15 1465 ++++++++++++++
> kmalloc-part-14 2988 +++++++++++++++++++++++++++++
> kmalloc-part-13 1656 ++++++++++++++++
> kmalloc-part-12 1045 ++++++++++
> kmalloc-part-11 1697 ++++++++++++++++
> kmalloc-part-10 1489 ++++++++++++++
> kmalloc-part-09 965 +++++++++
> kmalloc-part-08 710 +++++++
> kmalloc-part-07 100 +
> kmalloc-part-06 217 ++
> kmalloc-part-05 105 +
> kmalloc-part-04 4047 ++++++++++++++++++++++++++++++++++++++++
> kmalloc-part-03 183 +
> kmalloc-part-02 283 ++
> kmalloc-part-01 316 +++
> kmalloc 1422 ++++++++++++++
>
> The above /proc/slabinfo snapshot shows 6673 allocated objects
> (slabs 00 - 07) whose types the compiler determined contain no
> pointers, or whose types it could not infer, and 12015 objects that
> contain pointers (slabs 08 - 15). On the whole, this looks relatively
> sane.
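A histogram like the above can be produced from /proc/slabinfo with a
short awk filter (a sketch; field 2 is <active_objs>, and the sample
input, cache names, and +-per-100-objects scale are illustrative):

```shell
# Print one +-bar per ~100 active objects for each kmalloc cache.
hist() {
	awk 'NR > 2 && $1 ~ /^kmalloc/ {
		bar = ""
		for (i = 0; i < int($2 / 100); i++) bar = bar "+"
		printf "%-16s %5d %s\n", $1, $2, bar
	}'
}

# Demo on a fixed sample (real usage: hist < /proc/slabinfo):
printf '%s\n' \
	'slabinfo - version: 2.1' \
	'# name            <active_objs> <num_objs> ...' \
	'kmalloc-part-01        316   316 ...' \
	'kmalloc               1422  1500 ...' | hist
```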
>
> Additionally, when I compile my kernel with -Rpass=alloc-token, which
> provides diagnostics where (after dead-code elimination) type inference
> failed, I see 186 allocation sites where the compiler failed to identify
> a type (down from 966 when I sent the RFC [4]). Some initial review
> confirms these are mostly variable-sized buffers, but they also include
> structs with trailing flexible array members.
>
> Link: https://clang.llvm.org/docs/AllocToken.html [1]
> Link: https://blog.dfsec.com/ios/2025/05/30/blasting-past-ios-18/ [2]
> Link: https://lwn.net/Articles/944647/ [3]
> Link: https://lore.kernel.org/all/20250825154505.1558444-1-elver@xxxxxxxxxx/ [4]
> Link: https://discourse.llvm.org/t/rfc-a-framework-for-allocator-partitioning-hints/87434
> Acked-by: GONG Ruiqi <gongruiqi1@xxxxxxxxxx>
> Co-developed-by: Harry Yoo (Oracle) <harry@xxxxxxxxxx>
> Signed-off-by: Harry Yoo (Oracle) <harry@xxxxxxxxxx>
> Signed-off-by: Marco Elver <elver@xxxxxxxxxx>
Sashiko found 2 latent issues in the diff context:
https://sashiko.dev/#/patchset/20260415143735.2974230-1-elver%40google.com
The fix is here:
https://lore.kernel.org/all/20260416132837.3787694-1-elver@xxxxxxxxxx/
The irony is that TYPED_KMALLOC_CACHES would have helped mitigate this
kind of overflow bug.