Re: [net-next-2.6 PATCH 2/2] x86: Align skb w/ start of cache line on newer core 2/Xeon Arch

From: Eric Dumazet
Date: Wed Jun 02 2010 - 18:44:31 EST


On Wednesday, June 02, 2010 at 15:25 -0700, Jeff Kirsher wrote:
> From: Alexander Duyck <alexander.h.duyck@xxxxxxxxx>
>
> The x86 architecture handles unaligned accesses in hardware, and it has
> been shown that unaligned DMA accesses can be expensive on Nehalem
> architectures. As such we should override NET_IP_ALIGN and NET_SKB_PAD
> to resolve this issue.
>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@xxxxxxxxx>
> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@xxxxxxxxx>
> ---
>
> arch/x86/include/asm/system.h | 12 ++++++++++++
> 1 files changed, 12 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/include/asm/system.h b/arch/x86/include/asm/system.h
> index b8fe48e..8acb44e 100644
> --- a/arch/x86/include/asm/system.h
> +++ b/arch/x86/include/asm/system.h
> @@ -457,4 +457,16 @@ static inline void rdtsc_barrier(void)
> alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
> }
>
> +#ifdef CONFIG_MCORE2
> +/*
> + * We handle most unaligned accesses in hardware. On the other hand,
> + * unaligned DMA can be quite expensive on some Nehalem processors.
> + *
> + * Based on this we disable IP header alignment in network drivers.
> + * We also set NET_SKB_PAD to a full cacheline in size, thus maintaining
> + * cacheline alignment of buffers.
> + */
> +#define NET_IP_ALIGN 0
> +#define NET_SKB_PAD L1_CACHE_BYTES
> +#endif
> #endif /* _ASM_X86_SYSTEM_H */
>
> --
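
For context (not part of the patch): a minimal sketch of how a driver RX
path typically consumes these two constants. The helper name
my_rx_refill() is made up for illustration.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/*
 * Hypothetical RX buffer allocation, only to show where NET_IP_ALIGN
 * and NET_SKB_PAD come into play.  __netdev_alloc_skb() already
 * reserves NET_SKB_PAD bytes of headroom; the extra skb_reserve()
 * shifts the data pointer so the IP header behind the 14-byte Ethernet
 * header is 4-byte aligned.  With NET_IP_ALIGN defined to 0 that shift
 * goes away and the start of the DMA buffer stays cache-line aligned.
 */
static struct sk_buff *my_rx_refill(struct net_device *dev, unsigned int len)
{
	struct sk_buff *skb;

	skb = netdev_alloc_skb(dev, len + NET_IP_ALIGN);
	if (!skb)
		return NULL;

	skb_reserve(skb, NET_IP_ALIGN);	/* no-op when NET_IP_ALIGN == 0 */

	return skb;
}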

But... L1_CACHE_BYTES is 64 on MCORE2, so this matches the current
NET_SKB_PAD definition in include/linux/skbuff.h...

#ifndef NET_SKB_PAD
#define NET_SKB_PAD 64
#endif
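
For reference, L1_CACHE_BYTES on a CONFIG_MCORE2 build comes (as of this
kernel) from arch/x86/include/asm/cache.h, with CONFIG_X86_L1_CACHE_SHIFT
set to 6 for Core 2 by arch/x86/Kconfig.cpu:

#define L1_CACHE_SHIFT	(CONFIG_X86_L1_CACHE_SHIFT)
#define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)	/* 1 << 6 == 64 */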


