Re: [PATCH 7/7] clocksource/arm_arch_timer: Use arch_timer_read_counter to access stable counters

From: Marc Zyngier
Date: Tue Apr 30 2019 - 11:27:56 EST


On 15/04/2019 13:16, Daniel Lezcano wrote:
> On 08/04/2019 17:49, Marc Zyngier wrote:
>> Instead of always going via arch_counter_get_cntvct_stable to
>> access the counter workaround, let's have arch_timer_read_counter
>> point to the right method.
>>
>> For that, we need to track whether any CPU in the system has a
>> workaround for the counter. This is done with an atomic variable
>> that tracks it.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@xxxxxxx>
>> ---
>
> [ ... ]
>
>> +
>> /*
>> * Default to cp15 based access because arm64 uses this function for
>> * sched_clock() before DT is probed and the cp15 method is guaranteed
>> @@ -372,6 +392,7 @@ static u32 notrace sun50i_a64_read_cntv_tval_el0(void)
>> DEFINE_PER_CPU(const struct arch_timer_erratum_workaround *, timer_unstable_counter_workaround);
>> EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround);
>>
>> +static atomic_t timer_unstable_counter_workaround_in_use = ATOMIC_INIT(0);
>
> Wouldn't it make sense to use READ_ONCE / WRITE_ONCE instead of an atomic?

I don't think *_ONCE says anything about the atomicity of the access. It
only instructs the compiler that the access should happen exactly once,
and not be reloaded/rewritten. In this case, WRITE_ONCE() would work just
as well, but I feel that setting the expectations does matter.
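
To make the trade-off concrete, here is a minimal sketch (not the driver
code itself; the wa_in_use names are made up) of what the two options
look like for a simple "some CPU needs the workaround" flag:

#include <linux/atomic.h>
#include <linux/compiler.h>
#include <linux/types.h>

/* Option 1: plain int, accessed with READ_ONCE()/WRITE_ONCE() */
static int wa_in_use_flag;

static void wa_mark_in_use_once(void)
{
	/* A single store is enough; no read-modify-write involved */
	WRITE_ONCE(wa_in_use_flag, 1);
}

static bool wa_in_use_once(void)
{
	return READ_ONCE(wa_in_use_flag) != 0;
}

/* Option 2: atomic_t, which spells out the concurrency expectations */
static atomic_t wa_in_use_atomic = ATOMIC_INIT(0);

static void wa_mark_in_use_atomic(void)
{
	atomic_set(&wa_in_use_atomic, 1);
}

static bool wa_in_use_atomic_read(void)
{
	return atomic_read(&wa_in_use_atomic) != 0;
}

Both behave the same for a set-once flag; the atomic_t variant mostly
buys documentation of intent, plus room to grow into RMW usage later.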

I also had a vague idea to use this as a refcount to drop the
workarounds as CPUs get hotplugged off, in which case we'd need the RMW
operations to be atomic.
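
Purely for illustration (the names and hooks below are invented, not
part of this patch), the refcount variant would look something like:

#include <linux/atomic.h>
#include <linux/types.h>

static atomic_t wa_refcount = ATOMIC_INIT(0);

/* Called when a CPU that needs the workaround comes online */
static void wa_cpu_online(void)
{
	/* atomic RMW, safe against concurrent hotplug events */
	atomic_inc(&wa_refcount);
}

/* Called when such a CPU goes offline */
static bool wa_cpu_offline(void)
{
	/*
	 * atomic_dec_and_test() returns true once the count drops to
	 * zero, i.e. no remaining online CPU needs the workaround and
	 * arch_timer_read_counter could be switched back to the plain
	 * accessor.
	 */
	return atomic_dec_and_test(&wa_refcount);
}

Nothing in the current patch depends on this; it only explains why an
atomic type leaves that door open.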

Anyway, I'm not hell-bent on this. If you fundamentally disagree with me,
I'll change it.

Thanks,

M.
--
Jazz is not dead. It just smells funny...