Re: [PATCH] slub: fix data loss and overflow in krealloc()
From: Vlastimil Babka (SUSE)
Date: Fri Apr 17 2026 - 05:16:23 EST
On 4/16/26 15:25, Marco Elver wrote:
> Commit 2cd8231796b5 ("mm/slub: allow to set node and align in
> k[v]realloc") introduced the ability to force a reallocation if the
> original object does not satisfy new alignment or NUMA node, even when
> the object is being shrunk.
>
> This introduced two bugs in the reallocation fallback path:
>
> 1. Data loss during NUMA migration: The jump to 'alloc_new' happens
> before 'ks' and 'orig_size' are initialized. As a result, the
> memcpy() in the 'alloc_new' block would copy 0 bytes into the new
> allocation.
>
> 2. Buffer overflow during shrinking: When shrinking an object while
> forcing a new alignment, 'new_size' is smaller than the old size.
> However, the memcpy() used the old size ('orig_size ?: ks'), leading
> to an out-of-bounds write.
>
> The same overflow bug exists in the kvrealloc() fallback path, where the
> old bucket size ksize(p) is copied into the new buffer without being
> bounded by the new size.
>
> A simple reproducer:
>
> // e.g. add to lkdtm as KREALLOC_SHRINK_OVERFLOW
> while (1) {
>         void *p = kmalloc(128, GFP_KERNEL);
>         p = krealloc_node_align(p, 64, 256, GFP_KERNEL, NUMA_NO_NODE);
>         kfree(p);
> }
>
> demonstrates the issue:
>
> ==================================================================
> BUG: KFENCE: out-of-bounds write in memcpy_orig+0x68/0x130
>
> Out-of-bounds write at 0xffff8883ad757038 (120B right of kfence-#47):
> memcpy_orig+0x68/0x130
> krealloc_node_align_noprof+0x1c8/0x340
> lkdtm_KREALLOC_SHRINK_OVERFLOW+0x8c/0xc0 [lkdtm]
> lkdtm_do_action+0x3a/0x60 [lkdtm]
> ...
>
> kfence-#47: 0xffff8883ad756fc0-0xffff8883ad756fff, size=64, cache=kmalloc-64
>
> allocated by task 316 on cpu 7 at 97.680481s (0.021813s ago):
> krealloc_node_align_noprof+0x19c/0x340
> lkdtm_KREALLOC_SHRINK_OVERFLOW+0x8c/0xc0 [lkdtm]
> lkdtm_do_action+0x3a/0x60 [lkdtm]
> ...
> ==================================================================
>
> Fix it by moving the old size calculation to the top of __do_krealloc()
> and bounding all copy lengths by the new allocation size.
>
> Fixes: 2cd8231796b5 ("mm/slub: allow to set node and align in k[v]realloc")
> Cc: <stable@xxxxxxxxxxxxxxx>
> Reported-by: https://sashiko.dev/#/patchset/20260415143735.2974230-1-elver%40google.com
> Signed-off-by: Marco Elver <elver@xxxxxxxxxx>
Ouch, thanks. Added to slab/for-next-fixes
Indeed, the vrealloc fix would be a separate patch with a different Fixes:
commit, and handled in the mm tree.
> ---
> mm/slub.c | 24 ++++++++++++------------
> 1 file changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 92362eeb13e5..161079ac5ba1 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -6645,16 +6645,6 @@ __do_krealloc(const void *p, size_t new_size, unsigned long align, gfp_t flags,
> if (!kasan_check_byte(p))
> return NULL;
>
> - /*
> - * If reallocation is not necessary (e. g. the new size is less
> - * than the current allocated size), the current allocation will be
> - * preserved unless __GFP_THISNODE is set. In the latter case a new
> - * allocation on the requested node will be attempted.
> - */
> - if (unlikely(flags & __GFP_THISNODE) && nid != NUMA_NO_NODE &&
> - nid != page_to_nid(virt_to_page(p)))
> - goto alloc_new;
> -
> if (is_kfence_address(p)) {
> ks = orig_size = kfence_ksize(p);
> } else {
> @@ -6673,6 +6663,16 @@ __do_krealloc(const void *p, size_t new_size, unsigned long align, gfp_t flags,
> }
> }
>
> + /*
> + * If reallocation is not necessary (e. g. the new size is less
> + * than the current allocated size), the current allocation will be
> + * preserved unless __GFP_THISNODE is set. In the latter case a new
> + * allocation on the requested node will be attempted.
> + */
> + if (unlikely(flags & __GFP_THISNODE) && nid != NUMA_NO_NODE &&
> + nid != page_to_nid(virt_to_page(p)))
> + goto alloc_new;
> +
> /* If the old object doesn't fit, allocate a bigger one */
> if (new_size > ks)
> goto alloc_new;
> @@ -6707,7 +6707,7 @@ __do_krealloc(const void *p, size_t new_size, unsigned long align, gfp_t flags,
> if (ret && p) {
> /* Disable KASAN checks as the object's redzone is accessed. */
> kasan_disable_current();
> - memcpy(ret, kasan_reset_tag(p), orig_size ?: ks);
> + memcpy(ret, kasan_reset_tag(p), min(new_size, (size_t)(orig_size ?: ks)));
> kasan_enable_current();
> }
>
> @@ -6941,7 +6941,7 @@ void *kvrealloc_node_align_noprof(const void *p, size_t size, unsigned long alig
> if (p) {
> /* We already know that `p` is not a vmalloc address. */
> kasan_disable_current();
> - memcpy(n, kasan_reset_tag(p), ksize(p));
> + memcpy(n, kasan_reset_tag(p), min(size, ksize(p)));
> kasan_enable_current();
>
> kfree(p);