Re: [PATCH] x86: eliminate redundant/contradicting cache line size config options

From: Jan Beulich
Date: Mon Nov 16 2009 - 03:08:32 EST


>>> Nick Piggin <npiggin@xxxxxxx> 16.11.09 05:14 >>>
>On Fri, Nov 13, 2009 at 11:54:40AM +0000, Jan Beulich wrote:
>> Rather than having X86_L1_CACHE_BYTES and X86_L1_CACHE_SHIFT (with
>> inconsistent defaults), just having the latter suffices as the former
>> can be easily calculated from it.
>>
>> To be consistent, also change X86_INTERNODE_CACHE_BYTES to
>> X86_INTERNODE_CACHE_SHIFT, and set it to 7 (128 bytes) for NUMA to
>> account for last level cache line size (which here matters more than
>> L1 cache line size).
>
>I think if we're going to set it to 7 (128B, for Pentium 4), then
>we should set the L1 cache shift as well? Most alignments to
>prevent cacheline pingpong use L1 cache shift for this anyway?

But for P4 L1_CACHE_SHIFT already is 7.

>The internode thing is really just a not quite well defined thing
>because internode cachelines are really expensive and really big
>on vsmp so they warrant trading off extra space on some critical
>structures to reduce pingpong (but this is not to say that other
>structures that are *not* internode annotated do *not* need to
>worry about pingpong).

The internode one, as said in the patch description, should account
for the last level cache line size rather than the L1 size, and 128
bytes seems a much better fit for that (without introducing model
dependencies like those for L1) than just using the L1 value directly.

Jan

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/