Re: [PATCH v2 1/2] mm: vmalloc: implement vrealloc()
From: Vlastimil Babka
Date: Mon Sep 02 2024 - 04:57:38 EST
On 9/2/24 09:04, Feng Tang wrote:
> On Mon, Sep 02, 2024 at 09:36:26AM +0800, Tang, Feng wrote:
>> On Tue, Jul 30, 2024 at 08:15:34PM +0800, Vlastimil Babka wrote:
>> > On 7/30/24 3:35 AM, Danilo Krummrich wrote:
> [...]
>> >
>> > Let's say we kmalloc(56, __GFP_ZERO), we get an object from kmalloc-64
>> > cache. Since commit 946fa0dbf2d89 ("mm/slub: extend redzone check to
>> > extra allocated kmalloc space than requested") and preceding commits, if
>> > slub_debug is enabled (red zoning or user tracking), only the 56 bytes
>> > will be zeroed. The rest will be either unknown garbage, or redzone.
>>
>> Yes.
>>
>> >
>> > Then we might e.g. krealloc(120) and get a kmalloc-128 object and 64
>> > bytes (result of ksize()) will be copied, including the garbage/redzone.
>> > I think it's fixable because when we do this in slub_debug, we also
>> > store the original size in the metadata, so we could read it back and
>> > adjust how many bytes are copied.
>>
>> krealloc() --> __do_krealloc() --> ksize()
>> When ksize() is called, as we don't know what the user will do with the
>> extra space ([57, 64] here), the orig_size check will be unset by
>> __ksize() calling skip_orig_size_check().
>>
>> And if the new size is bigger than the old ksize(), the 'orig_size'
>> will be correctly set for the newly allocated kmalloc object.
Yes, but the memcpy() to the new object will be done using ksize() and will
thus include the redzone, e.g. bytes [57, 64].
>> For the 'unstable' branch of the -mm tree, which has all the latest
>> patches from Danilo, I ran some basic tests and it seems to be fine.
To test this it would not be enough to expect some slub_debug check to fail;
you'd e.g. have to kmalloc(48, GFP_KERNEL | __GFP_ZERO), then krealloc(128,
GFP_KERNEL | __GFP_ZERO), and verify there are zeroes from byte 48 to 128. I
suspect there won't be zeroes from 48 to 64 due to the redzone.
(this would have made a great lib/slub_kunit.c test :))
> when doing more tests, I found one case matching Vlastimil's previous
> concern: if we kzalloc a small object and then krealloc with a slightly
> bigger size that can still reuse the same kmalloc object, some redzone
> will be preserved.
>
> With test code like:
>
> buf = kzalloc(36, GFP_KERNEL);
> memset(buf, 0xff, 36);
>
> buf = krealloc(buf, 48, GFP_KERNEL | __GFP_ZERO);
>
> Data after kzalloc+memset :
>
> ffff88802189b040: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> ffff88802189b050: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> ffff88802189b060: ff ff ff ff cc cc cc cc cc cc cc cc cc cc cc cc
> ffff88802189b070: cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
>
> Data after krealloc:
>
> ffff88802189b040: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> ffff88802189b050: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> ffff88802189b060: ff ff ff ff cc cc cc cc cc cc cc cc cc cc cc cc
> ffff88802189b070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>
> If we really want bytes [37, 48] to be zeroed too, we can lift
> get_orig_size() from slub.c into slab_common.c and use it as the start
> offset for zeroing in krealloc().
Or maybe just move krealloc() to mm/slub.c so there are no unnecessary calls
between the files.
We should also set a new orig_size in cases where we are shrinking or
enlarging within the same object (e.g. 48->40 or 48->64). In the shrinking
case, we might also need to redzone the shrunk area (i.e. [40, 48]) or later
checks will fail. But if the current object is from kfence, we should
probably do none of this... sigh, this gets complicated. And we really need
kunit tests for all the scenarios :/
> Thanks,
> Feng