Re: [PATCH 1/4] mm: Move ARCH_SLAB_MINALIGN and ARCH_KMALLOC_MINALIGN to <linux/slab_def.h>

From: Manfred Spraul
Date: Wed May 19 2010 - 14:01:32 EST


On 05/19/2010 03:30 PM, Johannes Stezenbach wrote:
> Hi,
>
> I have some comments/questions, I hope it's not too silly:
>
> On Wed, May 19, 2010 at 12:01:42PM +0100, David Woodhouse wrote:
> > +#ifndef ARCH_KMALLOC_MINALIGN
> > +/*
> > + * Enforce a minimum alignment for the kmalloc caches.
> > + * Usually, the kmalloc caches are cache_line_size() aligned, except when
> > + * DEBUG and FORCED_DEBUG are enabled, then they are BYTES_PER_WORD aligned.
> > + * Some archs want to perform DMA into kmalloc caches and need a guaranteed
> > + * alignment larger than the alignment of a 64-bit integer.
> > + * ARCH_KMALLOC_MINALIGN allows that.
> > + * Note that increasing this value may disable some debug features.
> > + */
> > +#define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
> > +#endif
> I think the comment is confusing. IIRC kmalloc() API guarantees that
> the allocated buffer is suitable for DMA, so if cache coherence is not
> handled by hardware the arch might need to set this to the cache line size,
> and that's what ARCH_KMALLOC_MINALIGN is about. Nothing else.

Is this text better?

/*
 * Enforce a minimum alignment for the kmalloc caches.
 * kmalloc allocations are guaranteed to be at least BYTES_PER_WORD
 * (i.e. sizeof(void *)) aligned.
 * If an arch needs a larger guarantee (e.g. cache_line_size() due to DMA),
 * then it must use ARCH_KMALLOC_MINALIGN to enforce that.
 * Note: Do not set ARCH_KMALLOC_MINALIGN for performance reasons.
 * Unless debug options are enabled, the kernel uses cache_line_size()
 * automatically.
 */
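
FWIW, this is roughly how a non-cache-coherent arch would use the hook in
its asm/cache.h. Just a sketch; the file fragment and the shift value are
made up for illustration, not taken from any particular arch:

/* hypothetical arch/<arch>/include/asm/cache.h fragment */
#define L1_CACHE_SHIFT		5			/* assumed: 32-byte cache lines */
#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)

/*
 * DMA is done straight into kmalloc'ed buffers, so a buffer must never
 * share a cache line with unrelated data: raise the kmalloc guarantee
 * to a full cache line.
 */
#define ARCH_KMALLOC_MINALIGN	L1_CACHE_BYTES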

> > +#ifndef ARCH_SLAB_MINALIGN
> > +/*
> > + * Enforce a minimum alignment for all caches.
> > + * Intended for archs that get misalignment faults even for BYTES_PER_WORD
> > + * aligned buffers. Includes ARCH_KMALLOC_MINALIGN.
> > + * If possible: Do not enable this flag for CONFIG_DEBUG_SLAB, it disables
> > + * some debug features.
> > + */
> > +#define ARCH_SLAB_MINALIGN 0
> > +#endif
> Why is this needed at all? If code calls kmem_cache_create()
> with wrong align parameter, or has wrong expectations wrt kmalloc()
> alignment guarantees, this code needs to be fixed?
> I mean, portable code cannot assume that unaligned accesses work?

ARM uses 8 bytes, I don't know exactly why. Perhaps because it is a 32-bit
arch where unaligned 64-bit loads are not supported?
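
My mental model of how the two minimums and the kmem_cache_create() align
argument combine is the toy below. It is only a sketch of the rule as I
understand it, not the actual slab code, and the numeric values are
placeholders:

#include <stdio.h>

#define ARCH_KMALLOC_MINALIGN	32	/* assumed: a cache-line-sized value */
#define ARCH_SLAB_MINALIGN	8	/* the ARM value mentioned above */

static size_t max2(size_t a, size_t b)
{
	return a > b ? a : b;
}

/*
 * Effective alignment of a cache: at least word size, at least the
 * arch-wide floor, and at least whatever the caller asked for.
 */
static size_t effective_align(size_t requested)
{
	size_t align = sizeof(void *);		/* BYTES_PER_WORD baseline */

	align = max2(align, ARCH_SLAB_MINALIGN);
	align = max2(align, requested);
	return align;
}

int main(void)
{
	/* the kmalloc caches are created with ARCH_KMALLOC_MINALIGN as align */
	printf("kmalloc caches: %zu\n", effective_align(ARCH_KMALLOC_MINALIGN));
	/* a cache created with align == 0 still gets the minimum */
	printf("align == 0:     %zu\n", effective_align(0));
	return 0;
}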

--
Manfred