Re: block: DMA alignment of IO buffer allocated from slab

From: Bart Van Assche
Date: Mon Sep 24 2018 - 11:08:34 EST


On Mon, 2018-09-24 at 17:43 +0300, Andrey Ryabinin wrote:
>
> On 09/24/2018 05:19 PM, Bart Van Assche wrote:
> > On 9/24/18 2:46 AM, Andrey Ryabinin wrote:
> > > On 09/24/2018 01:42 AM, Ming Lei wrote:
> > > > On Fri, Sep 21, 2018 at 03:04:18PM +0200, Vitaly Kuznetsov wrote:
> > > > > Christoph Hellwig <hch@lst.de> writes:
> > > > >
> > > > > > On Wed, Sep 19, 2018 at 05:15:43PM +0800, Ming Lei wrote:
> > > > > > > 1) does kmalloc-N slab guarantee to return N-byte aligned buffer? If
> > > > > > > yes, is it a stable rule?
> > > > > >
> > > > > > This is the assumption in a lot of the kernel, so I think if something
> > > > > > breaks this we are in a lot of pain.
> > >
> > > This assumption is not correct. And it has not been correct since at least the beginning of the
> > > git era, which is even before the SLUB allocator appeared. With CONFIG_DEBUG_SLAB=y,
> > > just as with CONFIG_SLUB_DEBUG_ON=y, kmalloc() returns 'unaligned' objects.
> > > The guaranteed arch-and-config-independent alignment of the kmalloc() result is "sizeof(void*)".
>
> Correction: sizeof(unsigned long long), so an 8-byte alignment guarantee.
>
> > >
> > > If objects have a higher alignment requirement, they could be allocated via a specifically created kmem_cache.
> >
> > Hello Andrey,
> >
> > The above confuses me. Can you explain to me why the following comment is present in include/linux/slab.h?
> >
> > /*
> >  * kmalloc and friends return ARCH_KMALLOC_MINALIGN aligned
> >  * pointers. kmem_cache_alloc and friends return ARCH_SLAB_MINALIGN
> >  * aligned pointers.
> >  */
> >
>
> ARCH_KMALLOC_MINALIGN - the guaranteed alignment of the kmalloc() result.
> ARCH_SLAB_MINALIGN - the guaranteed alignment of the kmem_cache_alloc() result.
>
> If the 'align' argument passed into kmem_cache_create() is bigger than ARCH_SLAB_MINALIGN,
> then kmem_cache_alloc() from that cache should return 'align'-aligned pointers.

Hello Andrey,

Do you realize that the comment from <linux/slab.h> contradicts what you
wrote about kmalloc() if ARCH_KMALLOC_MINALIGN > sizeof(unsigned long long)?

Additionally, shouldn't CONFIG_DEBUG_SLAB=y and CONFIG_SLUB_DEBUG_ON=y
provide the same guarantees as with debugging disabled, namely that kmalloc()
buffers are aligned on ARCH_KMALLOC_MINALIGN boundaries? Since buffers
allocated with kmalloc() are often used for DMA, how else is DMA supposed
to work?
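
For what it's worth, if sizeof(unsigned long long) is really all that kmalloc()
guarantees, then as far as I can tell every driver that needs stronger alignment
for its DMA buffers would have to create a dedicated cache, along the lines of
the untested sketch below (the cache name, object size and alignment are made
up for illustration):

#include <linux/slab.h>

/* Hypothetical 512-byte IO buffer that must be 512-byte aligned for
 * DMA. Per your explanation above, passing an explicit 'align' to
 * kmem_cache_create() should make kmem_cache_alloc() return 512-byte
 * aligned objects even with slab debugging enabled.
 */
static struct kmem_cache *io_buf_cache;

static int __init io_buf_cache_init(void)
{
	io_buf_cache = kmem_cache_create("io_buf", 512, 512, 0, NULL);
	return io_buf_cache ? 0 : -ENOMEM;
}

and then allocate with kmem_cache_alloc(io_buf_cache, GFP_KERNEL) instead of
kmalloc(). That seems like a lot of churn compared to what drivers do today.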

Thanks,

Bart.