[GIT PULL] slab updates for 6.11

From: Vlastimil Babka
Date: Wed Jul 17 2024 - 06:49:38 EST


Hi Linus,

please pull the latest slab updates from:

git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git tags/slab-for-6.11

No merge conflicts with other trees are expected.

Thanks,
Vlastimil

======================================

The most prominent change this time is the kmem_buckets based hardening of
kmalloc() allocations from Kees Cook. We have also extended the kmalloc()
alignment guarantees for non-power-of-two sizes in a way that benefits Rust.
The rest are various cleanups and non-critical fixups.

- Dedicated bucket allocator (Kees Cook)

This series [1] enhances the probabilistic defense against heap
spraying/grooming of CONFIG_RANDOM_KMALLOC_CACHES from last year. kmalloc()
users that are known to be useful for exploits can get a completely separate
set of kmalloc caches that can't be shared with other users. The first
converted users are alloc_msg() and memdup_user(). The hardening is enabled by
CONFIG_SLAB_BUCKETS.
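
As a rough sketch of how a caller opts into a dedicated bucket set using the
kmem_buckets_create()/kmem_buckets_alloc() API from the series (the cache name,
function names and flags below are illustrative, not taken from the converted
users; when CONFIG_SLAB_BUCKETS=n the API transparently falls back to the
shared kmalloc caches):

	/* Kernel-side sketch; not a standalone program. */
	#include <linux/slab.h>

	static kmem_buckets *my_buckets __ro_after_init;

	static int __init my_buckets_init(void)
	{
		/* Create a private set of kmalloc-style caches. */
		my_buckets = kmem_buckets_create("my_buckets", SLAB_ACCOUNT,
						 0, 0, NULL);
		if (!my_buckets)
			return -ENOMEM;
		return 0;
	}

	static void *my_alloc(size_t len)
	{
		/* Allocations land in the dedicated buckets, so they
		 * cannot share slab pages with other kmalloc() users. */
		return kmem_buckets_alloc(my_buckets, len, GFP_KERNEL);
	}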

- Extended kmalloc() alignment guarantees (Vlastimil Babka)

For years now we have guaranteed natural alignment for power-of-two
allocations, but nothing was defined for other sizes (in practice, we have
two such buckets, kmalloc-96 and kmalloc-192). To avoid unnecessary padding
in the Rust layer due to its alignment rules, extend the guarantee so that
the alignment is at least the largest power-of-two divisor of the requested
size. This fits what Rust needs, is a superset of the existing power-of-two
guarantee, and does not in practice change the layout (and thus does not add
overhead due to padding) of the kmalloc-96 and kmalloc-192 caches, unless slab
debugging is enabled for them.

- Cleanups and non-critical fixups (Chengming Zhou, Suren Baghdasaryan, Matthew
Wilcox, Alex Shi, Vlastimil Babka)

Various tweaks related to the new alloc profiling code, folio conversion,
debugging, and remaining leftovers from the SLAB allocator removal.

[1] https://lore.kernel.org/all/20240701190152.it.631-kees@xxxxxxxxxx/

----------------------------------------------------------------
Alex Shi (Tencent) (1):
mm/memcg: alignment memcg_data define condition

Chengming Zhou (3):
slab: make check_object() more consistent
slab: don't put freepointer outside of object if only orig_size
slab: delete useless RED_INACTIVE and RED_ACTIVE

Kees Cook (6):
mm/slab: Introduce kmem_buckets typedef
mm/slab: Plumb kmem_buckets into __do_kmalloc_node()
mm/slab: Introduce kvmalloc_buckets_node() that can take kmem_buckets argument
mm/slab: Introduce kmem_buckets_create() and family
ipc, msg: Use dedicated slab buckets for alloc_msg()
mm/util: Use dedicated slab buckets for memdup_user()

Matthew Wilcox (Oracle) (1):
mm: Reduce the number of slab->folio casts

Suren Baghdasaryan (2):
mm, slab: move allocation tagging code in the alloc path into a hook
mm, slab: move prepare_slab_obj_exts_hook under CONFIG_MEM_ALLOC_PROFILING

Vlastimil Babka (3):
mm, slab: don't wrap internal functions with alloc_hooks()
slab, rust: extend kmalloc() alignment guarantees to remove Rust padding
Merge branch 'slab/for-6.11/buckets' into slab/for-next

Documentation/core-api/memory-allocation.rst | 6 +-
include/linux/mm.h | 6 +-
include/linux/mm_types.h | 9 +-
include/linux/poison.h | 7 +-
include/linux/slab.h | 97 +++++++++----
ipc/msgutil.c | 13 +-
kernel/configs/hardening.config | 1 +
lib/fortify_kunit.c | 2 -
lib/slub_kunit.c | 2 +-
mm/Kconfig | 17 +++
mm/slab.h | 14 +-
mm/slab_common.c | 111 +++++++++++++-
mm/slub.c | 209 +++++++++++++++------------
mm/util.c | 23 ++-
rust/kernel/alloc/allocator.rs | 19 +--
scripts/kernel-doc | 1 +
tools/include/linux/poison.h | 7 +-
17 files changed, 369 insertions(+), 175 deletions(-)