Re: block: DMA alignment of IO buffer allocated from slab
From: Bart Van Assche
Date: Mon Sep 24 2018 - 12:19:50 EST
On Mon, 2018-09-24 at 19:07 +0300, Andrey Ryabinin wrote:
> On 09/24/2018 06:58 PM, Bart Van Assche wrote:
> > On Mon, 2018-09-24 at 18:52 +0300, Andrey Ryabinin wrote:
> > > Yes, with CONFIG_DEBUG_SLAB=y, CONFIG_SLUB_DEBUG_ON=y kmalloc() guarantees
> > > that result is aligned on ARCH_KMALLOC_MINALIGN boundary.
> >
> > Had you noticed that Vitaly Kuznetsov showed that this is not the case? See
> > also https://lore.kernel.org/lkml/87h8ij0zot.fsf@vitty.brq.redhat.com/.
>
> I'm not following. On x86-64 ARCH_KMALLOC_MINALIGN is 8, all pointers that
> Vitaly Kuznetsov showed are 8-byte aligned.
Hi Andrey,
That means that two buffers allocated with kmalloc() may share a cache line on
x86-64. Since buffers allocated with kmalloc() may be used for DMA, can this
lead to data corruption, e.g. when the CPU writes into one kmalloc() buffer
while a device performs a DMA write into another kmalloc() buffer and both
writes touch the same cache line?
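
To make the scenario concrete, here is a minimal sketch (not from this thread;
the module name and the printed message are made up) that allocates two small
buffers with kmalloc() and reports whether they fall into the same cache line,
assuming cache_line_size() reflects the line size relevant for DMA on the
platform under test:

/* Hypothetical test module, for illustration only. */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/cache.h>

static int __init kmalloc_share_init(void)
{
	void *a = kmalloc(8, GFP_KERNEL);
	void *b = kmalloc(8, GFP_KERNEL);
	unsigned int line = cache_line_size();

	if (a && b)
		pr_info("a=%px b=%px share a cache line: %s\n", a, b,
			((unsigned long)a / line == (unsigned long)b / line) ?
			"yes" : "no");
	kfree(a);
	kfree(b);
	return 0;
}

static void __exit kmalloc_share_exit(void)
{
}

module_init(kmalloc_share_init);
module_exit(kmalloc_share_exit);
MODULE_LICENSE("GPL");

With ARCH_KMALLOC_MINALIGN == 8 on x86-64 the two 8-byte allocations can end up
in the same 64-byte cache line, which is the situation the question above is
about.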
Thanks,
Bart.