Re: [PATCH RFC] x86: avoid atomic operation in test_and_set_bit_lock if possible

From: Ingo Molnar
Date: Thu Mar 24 2011 - 04:57:12 EST



* Nikanth Karthikesan <knikanth@xxxxxxx> wrote:

> On x86_64 SMP with lots of CPUs, atomic instructions which assert the LOCK#
> signal can stall other CPUs. And as the number of cores increases, this
> penalty scales proportionately. So it is best to try and avoid atomic
> instructions wherever possible. test_and_set_bit_lock() can avoid using
> LOCK_PREFIX if it finds the bit already set.
>
> Signed-off-by: Nikanth Karthikesan <knikanth@xxxxxxx>
>
> ---
>
> diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> index 903683b..26a42ff 100644
> --- a/arch/x86/include/asm/bitops.h
> +++ b/arch/x86/include/asm/bitops.h
> @@ -203,19 +203,6 @@ static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
> }
>
> /**
> - * test_and_set_bit_lock - Set a bit and return its old value for lock
> - * @nr: Bit to set
> - * @addr: Address to count from
> - *
> - * This is the same as test_and_set_bit on x86.
> - */
> -static __always_inline int
> -test_and_set_bit_lock(int nr, volatile unsigned long *addr)
> -{
> -	return test_and_set_bit(nr, addr);
> -}
> -
> -/**
> * __test_and_set_bit - Set a bit and return its old value
> * @nr: Bit to set
> * @addr: Address to count from
> @@ -339,6 +326,25 @@ static int test_bit(int nr, const volatile unsigned long *addr);
> : variable_test_bit((nr), (addr)))
>
> /**
> + * test_and_set_bit_lock - Set a bit and return its old value for lock
> + * @nr: Bit to set
> + * @addr: Address to count from
> + *
> + * This is the same as test_and_set_bit on x86, but the atomic operation
> + * is avoided if the bit is already set.
> + */
> +static __always_inline int
> +test_and_set_bit_lock(int nr, volatile unsigned long *addr)
> +{
> +#ifdef CONFIG_SMP
> +	barrier();
> +	if (test_bit(nr, addr))
> +		return 1;
> +#endif
> +	return test_and_set_bit(nr, addr);
> +}

On modern x86 CPUs there's no "LOCK# signal" anymore - it has been replaced by
the M[O]ESI cache coherency protocol. I'd expect modern x86 CPUs to be pretty
fast when the cacheline is local and the bit is already set.

So you really need to back up your patch with actual hard numbers. Putting this
code into user-space, using pthreads to loop on the same global variable and
measuring the before/after effect would be sufficient, I think - something like
the (completely untested) sketch below. You can use 'perf stat --repeat 10'
style measurements to see whether any improvement is larger than the noise of
the measurement.
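
For instance - the thread count, the iteration count and GCC's
__sync_fetch_and_or() builtin standing in for the kernel's locked bit op are
all just illustrative choices here, nothing from the patch:

/*
 * NTHREADS workers all hammer test_and_set on bit 0 of one shared
 * word that is already set - the case the patch optimizes. Build
 * once plain and once with -DCHECK_FIRST and compare the two.
 */
#include <pthread.h>

#define NTHREADS	4
#define ITERS		10000000UL

static volatile unsigned long word;

static int test_and_set_bit_lock(int nr, volatile unsigned long *addr)
{
#ifdef CHECK_FIRST
	/* the patched fast path: a plain read before the locked RMW */
	if (*addr & (1UL << nr))
		return 1;
#endif
	/* __sync_fetch_and_or() compiles to a lock-prefixed op on x86 */
	return (__sync_fetch_and_or(addr, 1UL << nr) >> nr) & 1;
}

static void *worker(void *arg)
{
	unsigned long i, old = 0;

	for (i = 0; i < ITERS; i++)
		old += test_and_set_bit_lock(0, &word);
	return (void *)old;	/* keep the loop from being optimized out */
}

int main(void)
{
	pthread_t tid[NTHREADS];
	int i;

	word = 1;		/* bit 0 already set */
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

Then something like:

  gcc -O2 -pthread bench.c -o plain
  gcc -O2 -pthread -DCHECK_FIRST bench.c -o fastpath
  perf stat --repeat 10 ./plain
  perf stat --repeat 10 ./fastpath

should tell us whether the difference is above the noise.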

Thanks,

Ingo