Re: [PATCH v6 1/5] slab: Introduce kmalloc_obj() and family
From: Vlastimil Babka
Date: Thu Jan 08 2026 - 09:43:06 EST
On 12/4/25 00:30, Kees Cook wrote:
> Introduce type-aware kmalloc-family helpers to replace the common
> idioms for single object and arrays of objects allocation:
>
> ptr = kmalloc(sizeof(*ptr), gfp);
> ptr = kmalloc(sizeof(struct some_obj_name), gfp);
> ptr = kzalloc(sizeof(*ptr), gfp);
> ptr = kmalloc_array(count, sizeof(*ptr), gfp);
> ptr = kcalloc(count, sizeof(*ptr), gfp);
>
> These become, respectively:
>
> ptr = kmalloc_obj(*ptr, gfp);
> ptr = kmalloc_obj(struct some_obj_name, gfp);
> ptr = kzalloc_obj(*ptr, gfp);
> ptr = kmalloc_objs(*ptr, count, gfp);
> ptr = kzalloc_objs(*ptr, count, gfp);
>
> Beyond the other benefits outlined below, the primary ergonomic benefit
> is eliminating the need for "sizeof" and, usually, the type name, and
> the enforcement of assignment types (these helpers do not return
> "void *", but rather a pointer to the type of the first argument). The
> type name _can_ be
> used, though, in the case where an assignment is indirect (e.g. via
> "return"). This additionally allows[1] variables to be declared via
> __auto_type:
>
> __auto_type ptr = kmalloc_obj(struct foo, gfp);
>
> Internal introspection of the allocated type now becomes possible,
> allowing for future alignment-aware choices to be made by the allocator
> and future hardening work that can be type sensitive. For example,
> adding __alignof(*ptr) as an argument to the internal allocators so that
> appropriate/efficient alignment choices can be made, or being able to
> correctly choose per-allocation offset randomization within a bucket
> that does not break alignment requirements.
>
> Link: https://lore.kernel.org/all/CAHk-=wiCOTW5UftUrAnvJkr6769D29tF7Of79gUjdQHS_TkF5A@xxxxxxxxxxxxxx/ [1]
> Signed-off-by: Kees Cook <kees@xxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
How do you plan to handle this series? Given that the slab changes are
minimal (just wrappers) but there are also changes elsewhere, do you want
to take it through your hardening tree? I wouldn't mind.
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -12,6 +12,7 @@
> #ifndef _LINUX_SLAB_H
> #define _LINUX_SLAB_H
>
> +#include <linux/bug.h>
> #include <linux/cache.h>
> #include <linux/gfp.h>
> #include <linux/overflow.h>
> @@ -965,6 +966,63 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
> void *kmalloc_nolock_noprof(size_t size, gfp_t gfp_flags, int node);
> #define kmalloc_nolock(...) alloc_hooks(kmalloc_nolock_noprof(__VA_ARGS__))
>
> +/**
> + * __alloc_objs - Allocate objects of a given type with a given allocator
> + * @KMALLOC: which size-based kmalloc wrapper to allocate with.
> + * @GFP: GFP flags for the allocation.
> + * @TYPE: type to allocate space for.
> + * @COUNT: how many @TYPE objects to allocate.
> + *
> + * Returns: Pointer to the first of @COUNT newly allocated @TYPE
> + * objects, or NULL on failure.
> + */
> +#define __alloc_objs(KMALLOC, GFP, TYPE, COUNT) \
> +({ \
> + const size_t __obj_size = size_mul(sizeof(TYPE), COUNT); \
I assume with the hardcoded 1 for COUNT, this size_mul() will be eliminated
by the compiler and not add unnecessary runtime overhead? Otherwise we
should have two core #define variants.
I also noted that the existing kmalloc_array() and kvmalloc_array() do
check_mul_overflow() and return NULL silently on overflow. The size_mul()
used here will instead, AFAIU, saturate to SIZE_MAX, which then gets passed
to the underlying kmalloc/kvmalloc and thus triggers a warning. That's IMHO
a good thing.
> + (TYPE *)KMALLOC(__obj_size, GFP); \
> +})
> +
> +/**
> + * kmalloc_obj - Allocate a single instance of the given type
> + * @VAR_OR_TYPE: Variable or type to allocate.
> + * @GFP: GFP flags for the allocation.
> + *
> + * Returns: newly allocated pointer to a @VAR_OR_TYPE on success, or NULL
> + * on failure.
> + */
> +#define kmalloc_obj(VAR_OR_TYPE, GFP) \
> + __alloc_objs(kmalloc, GFP, typeof(VAR_OR_TYPE), 1)
> +
> +/**
> + * kmalloc_objs - Allocate an array of the given type
> + * @VAR_OR_TYPE: Variable or type to allocate an array of.
> + * @COUNT: How many elements in the array.
> + * @GFP: GFP flags for the allocation.
> + *
> + * Returns: newly allocated pointer to array of @VAR_OR_TYPE on success,
> + * or NULL on failure.
> + */
> +#define kmalloc_objs(VAR_OR_TYPE, COUNT, GFP) \
> + __alloc_objs(kmalloc, GFP, typeof(VAR_OR_TYPE), COUNT)
> +
> +/* All kzalloc aliases for kmalloc_(obj|objs|flex). */
> +#define kzalloc_obj(P, GFP) \
> + __alloc_objs(kzalloc, GFP, typeof(P), 1)
> +#define kzalloc_objs(P, COUNT, GFP) \
> + __alloc_objs(kzalloc, GFP, typeof(P), COUNT)
> +
> +/* All kvmalloc aliases for kmalloc_(obj|objs|flex). */
> +#define kvmalloc_obj(P, GFP) \
> + __alloc_objs(kvmalloc, GFP, typeof(P), 1)
> +#define kvmalloc_objs(P, COUNT, GFP) \
> + __alloc_objs(kvmalloc, GFP, typeof(P), COUNT)
> +
> +/* All kvzalloc aliases for kmalloc_(obj|objs|flex). */
> +#define kvzalloc_obj(P, GFP) \
> + __alloc_objs(kvzalloc, GFP, typeof(P), 1)
> +#define kvzalloc_objs(P, COUNT, GFP) \
> + __alloc_objs(kvzalloc, GFP, typeof(P), COUNT)
> +
> #define kmem_buckets_alloc(_b, _size, _flags) \
> alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
>