Re: block: DMA alignment of IO buffer allocated from slab

From: Andrey Ryabinin
Date: Mon Sep 24 2018 - 05:46:12 EST


On 09/24/2018 01:42 AM, Ming Lei wrote:
> On Fri, Sep 21, 2018 at 03:04:18PM +0200, Vitaly Kuznetsov wrote:
>> Christoph Hellwig <hch@xxxxxx> writes:
>>
>>> On Wed, Sep 19, 2018 at 05:15:43PM +0800, Ming Lei wrote:
>>>> 1) does kmalloc-N slab guarantee to return an N-byte aligned buffer? If
>>>> yes, is it a stable rule?
>>>
>>> This is the assumption in a lot of the kernel, so I think if something
>>> breaks this we are in a lot of pain.

This assumption is not correct, and it has been wrong since at least the
beginning of the git era, which predates the appearance of the SLUB allocator.
With CONFIG_DEBUG_SLAB=y, just as with CONFIG_SLUB_DEBUG_ON=y, kmalloc()
returns 'unaligned' objects. The only arch- and config-independent alignment
guaranteed for a kmalloc() result is sizeof(void *).
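To illustrate (a minimal, untested sketch, not taken from the thread; the
size 512 is made up): the first check below always holds, while the second
one can trip once slab debugging shifts objects around with red zones:

#include <linux/kernel.h>
#include <linux/slab.h>

static void kmalloc_alignment_demo(void)
{
	void *buf = kmalloc(512, GFP_KERNEL);

	if (!buf)
		return;

	/* Always holds: kmalloc() guarantees sizeof(void *) alignment. */
	WARN_ON(!IS_ALIGNED((unsigned long)buf, sizeof(void *)));

	/*
	 * Can fire with CONFIG_DEBUG_SLAB=y or slub_debug=Z: red zones
	 * are placed around the object, so the buffer is no longer
	 * guaranteed to be aligned to its own size.
	 */
	WARN_ON(!IS_ALIGNED((unsigned long)buf, 512));

	kfree(buf);
}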

If objects have a higher alignment requirement, they could be allocated via a
specifically created kmem_cache.
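For example, something along these lines (an untested sketch; the cache name,
object size and init function are made up for illustration) guarantees
L1-cache-line alignment regardless of the debug options:

#include <linux/cache.h>
#include <linux/slab.h>

static struct kmem_cache *io_buf_cache;

static int io_buf_cache_init(void)
{
	/* The third argument is the minimum alignment of every object. */
	io_buf_cache = kmem_cache_create("io_buf", 512,
					 cache_line_size(), 0, NULL);
	return io_buf_cache ? 0 : -ENOMEM;
}

Buffers would then come from kmem_cache_alloc(io_buf_cache, GFP_KERNEL)
instead of plain kmalloc().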


>
> If even some of the buffer addresses are _not_ L1 cache size aligned, this
> approach is totally broken wrt. DMA to/from such a buffer.
>
> So this issue has to be fixed on the slab debug side.
>

Well, this would definitely increase memory consumption. Many (probably most)
kmalloc() users don't need such alignment, so why should they pay the cost?