Re: [PATCH v4.16-rc5 2/3] x86/vdso: on Intel, VDSO should handle CLOCK_MONOTONIC_RAW

From: Thomas Gleixner
Date: Wed Mar 14 2018 - 10:48:17 EST


On Wed, 14 Mar 2018, jason.vas.dias@xxxxxxxxx wrote:

Again: Read and comply with Documentation/process/ and fix the complaints
of checkpatch.pl.
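
(For reference, checkpatch is run from the top of a kernel tree against the patch file; the file name below is hypothetical:)

```
$ ./scripts/checkpatch.pl --strict 0002-x86-vdso-CLOCK_MONOTONIC_RAW.patch
```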

> diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
> index fbc7371..2c46675 100644
> --- a/arch/x86/entry/vdso/vclock_gettime.c
> +++ b/arch/x86/entry/vdso/vclock_gettime.c
> @@ -184,10 +184,9 @@ notrace static u64 vread_tsc(void)
>
> notrace static u64 vread_tsc_raw(void)
> {
> - u64 tsc
> + u64 tsc = (gtod->has_rdtscp ? rdtscp((void*)0) : rdtsc_ordered())
> , last = gtod->raw_cycle_last;

Aside from the totally broken coding style, including the use of (void *)0:

Did you ever benchmark rdtscp() against rdtsc_ordered()?

If so, then the results want to be documented in the changelog, and this
change only makes sense if rdtscp() is actually faster.

Please document how you measured that so others can actually run the same
tests and make their own judgement.

If it turns out that rdtscp() is faster, which I doubt, then the
conditional is the wrong way to do it. The selection wants to be patched in
at boot time, which avoids the conditional completely.
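
The established mechanism for that is an ALTERNATIVE at the read site, rewritten once during boot based on the CPU feature bits, so no runtime branch survives. A hedged sketch of the shape this could take (kernel fragment, not standalone-compilable; DECLARE_ARGS/EAX_EDX_* are the existing msr.h helpers):

```c
static __always_inline u64 rdtsc_ordered(void)
{
	DECLARE_ARGS(val, low, high);

	/*
	 * The boot-time alternatives machinery patches this site to the
	 * best ordered TSC read the CPU supports; no conditional is
	 * evaluated at runtime.
	 */
	asm volatile(ALTERNATIVE_2("rdtsc",
				   "lfence; rdtsc", X86_FEATURE_LFENCE_RDTSC,
				   "rdtscp", X86_FEATURE_RDTSCP)
		     : EAX_EDX_RET(val, low, high)
		     /* RDTSCP clobbers ECX with the CPU/node id. */
		     : : "ecx");

	return EAX_EDX_VAL(val, low, high);
}
```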

Thanks,

tglx