Re: [PATCH v2 2/2] mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)
From: Vlastimil Babka
Date: Wed Sep 25 2019 - 03:17:54 EST
On 9/25/19 1:54 AM, Andrew Morton wrote:
> On Tue, 24 Sep 2019 20:52:52 +0000 (UTC) cl@xxxxxxxxx wrote:
>
>> On Mon, 23 Sep 2019, David Sterba wrote:
>>
>>> As a user of the allocator interface in filesystem, I'd like to see a
>>> more generic way to address the alignment guarantees so we don't have to
>>> apply workarounds like 3acd48507dc43eeeb each time we find that we
>>> missed something. (Where 'missed' might be another sort of weird memory
>>> corruption hard to trigger.)
>>
>> The alignment guarantees are clearly documented and objects are misaligned
>> in debugging kernels.
>>
>> Looking at 3acd48507dc43eeeb: looks like no one tested that patch with
>> a debug kernel or full debugging enabled until it hit mainline. Not good.
>>
>> The consequence of the lack of proper testing is to make the production
>> kernel contain the debug measures?
>
> This isn't a debug measure - it's making the interface do that which
> people evidently expect it to do. (Minor point.)
Yes, detecting issues due to misalignment is one thing, but then there
are the workarounds needed to actually achieve the alignment (for
multiple sizes, so a single kmem_cache_create(..., align) doesn't
suffice), as the XFS folks demonstrated.
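
To illustrate (a minimal sketch, not the actual XFS code; the
io_buf_cache name is made up): without the guarantee, a subsystem that
needs naturally aligned buffers of several sizes has to create one
explicitly aligned cache per size, because kmem_cache_create() takes a
single size/align pair:

	/*
	 * One cache per power-of-two size, each with align == size,
	 * because kmalloc() cannot be asked for a specific alignment.
	 */
	static struct kmem_cache *io_buf_cache[4];

	static int __init io_buf_init(void)
	{
		int i;

		for (i = 0; i < 4; i++) {
			unsigned int size = 512u << i;	/* 512 .. 4096 */
			char *name = kasprintf(GFP_KERNEL, "io-buf-%u", size);

			if (!name)
				return -ENOMEM;
			io_buf_cache[i] = kmem_cache_create(name, size, size,
							    0, NULL);
			if (!io_buf_cache[i])
				return -ENOMEM;
		}
		return 0;
	}

With the guarantee in place, all of this collapses back to plain
kmalloc(size, ...).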
> I agree it's a bit regrettable to do this but it does appear that the
> change will make the kernel overall a better place given the reality of
> kernel development.
Thanks.
> Given this, have you reviewed the patch for overall implementation
> correctness?
>
> I'm wondering if we can avoid at least some of the patch's overhead if
> slab debugging is disabled - the allocators are already returning
> suitably aligned memory, so why add the new code in that case?
Most of the new code is for SLOB, which has no debugging and yet
misaligns. For SLUB and SLAB, it's just passing an alignment argument
to kmem_cache_create() for the kmalloc caches, which means only a few
extra instructions during boot, and no extra code in the kmalloc/kfree
paths themselves.
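
Concretely, the SLAB/SLUB side boils down to something like the
following in create_boot_cache() (a paraphrased sketch of the patch,
not a verbatim quote):

	unsigned int align = ARCH_KMALLOC_MINALIGN;

	/*
	 * For power-of-two sizes, guarantee natural alignment for
	 * kmalloc caches, regardless of SL*B debugging options.
	 */
	if (is_power_of_2(size))
		align = max(align, size);
	s->align = calculate_alignment(flags, align, size);

SLOB, by contrast, needs changes in its actual allocation path, since
it places objects with only minimal alignment rather than using
per-size caches.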