[PATCH v6 0/4] mm/slub: some debug enhancements for kmalloc

From: Feng Tang
Date: Tue Sep 13 2022 - 02:54:56 EST


kmalloc's API family is critical for mm, and by nature it rounds up
the request size to a fixed bucket size (mostly a power of 2). When a
user requests memory for '2^n + 1' bytes, 2^(n+1) bytes may actually
be allocated, so in the worst case around 50% of the memory space is
wasted.

The wastage is not a big issue for requests that get allocated and
freed quickly, but it may cause problems for objects with a longer
lifetime, and there have been OOM reports in some extreme cases.

This patchset (4 patches) tries to:
* Add a debug method to track each kmalloc'ed object's wastage info,
and show the call stack of the original allocation (depends on the
SLAB_STORE_USER flag) (Patch 1)

* Extend the redzone sanity check to the extra kmalloc'ed buffer
beyond the requested size, to better detect illegitimate access to
it (depends on SLAB_STORE_USER & SLAB_RED_ZONE) (Patch 2/3/4, where
2/3 are preparation patches)

The redzone part has been tested with code below:

	for (shift = 3; shift <= 12; shift++) {
		size = 1 << shift;
		buf = kmalloc(size + 4, GFP_KERNEL);
		/* We have 96, 192 kmalloc sizes, which are not power of 2 */
		if (size == 64 || size == 128)
			oob_size = 16;
		else
			oob_size = size - 4;
		memset(buf + size + 4, 0xee, oob_size);
		kfree(buf);
	}

Please help to review, thanks!

- Feng

---
Changelogs:

since v5:
* Refine code/comments and add more perf info in commit log for
kzalloc change (Hyeonggon Yoo)
* change the kasan param name and refine comments about
kasan+redzone handling (Andrey Konovalov)
* put free pointer in meta data to make redzone check cover all
kmalloc objects (Hyeonggon Yoo)

since v4:
* fix a race issue in v3, by moving kmalloc debug init into
alloc_debug_processing (Hyeonggon Yoo)
* add 'partial_context' for better parameter passing in get_partial()
call chain (Vlastimil Babka)
* update 'slub.rst' for 'alloc_traces' part (Hyeonggon Yoo)
* update code comments for 'orig_size'

since v3:
* rebase against latest post 6.0-rc1 slab tree's 'for-next' branch
* fix a bug reported by 0Day, that kmalloc-redzoned data and kasan's
free meta data overlaps in the same kmalloc object data area

since v2:
* rebase against slab tree's 'for-next' branch
* fix pointer handling (Kefeng Wang)
* move kzalloc zeroing handling change to a separate patch (Vlastimil Babka)
* make 'orig_size' only depend on KMALLOC & STORE_USER flag
bits (Vlastimil Babka)

since v1:
* limit the 'orig_size' to kmalloc objects only, and save
it after track in metadata (Vlastimil Babka)
* fix an offset calculation problem in print_trailer

since RFC:
* fix problems in kmem_cache_alloc_bulk() and records sorting,
improve the print format (Hyeonggon Yoo)
* fix a compiling issue found by 0Day bot
* update the commit log based on info from iova developers

Feng Tang (4):
mm/slub: enable debugging memory wasting of kmalloc
mm/slub: only zero the requested size of buffer for kzalloc
mm: kasan: Add free_meta size info in struct kasan_cache
mm/slub: extend redzone check to extra allocated kmalloc space than
requested

Documentation/mm/slub.rst | 33 +++---
include/linux/kasan.h | 2 +
include/linux/slab.h | 2 +
mm/kasan/common.c | 2 +
mm/slab.c | 7 +-
mm/slab.h | 9 +-
mm/slab_common.c | 4 +
mm/slub.c | 217 ++++++++++++++++++++++++++++++--------
8 files changed, 214 insertions(+), 62 deletions(-)

--
2.34.1