[PATCH 4.18 29/34] x86/tsc: Force inlining of cyc2ns bits

From: Greg Kroah-Hartman
Date: Thu Nov 08 2018 - 17:10:31 EST


4.18-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>

commit 4907c68abd3f60f650f98d5a69d4ec77c0bde44f upstream.

Looking at the asm for native_sched_clock() I noticed we don't inline
enough, mostly because of the code now shared with cyc2ns_read_begin(),
which we didn't previously share. So mark all of it __always_inline to
make it DTRT.

Fixes: 59eaef78bfea ("x86/tsc: Remodel cyc2ns to use seqcount_latch()")
Reported-by: Eric Dumazet <edumazet@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: hpa@xxxxxxxxx
Cc: eric.dumazet@xxxxxxxxx
Cc: bp@xxxxxxxxx
Cc: stable@xxxxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/20181011104019.695196158@xxxxxxxxxxxxx
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
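
[ Note: a minimal userspace sketch, not part of the patch, of what
  __always_inline buys here: unlike plain "inline", which the compiler is
  free to ignore, the __attribute__((__always_inline__)) form forces the
  helper to be emitted at the call site, which is what keeps
  native_sched_clock() free of out-of-line calls into the cyc2ns helpers.
  The names below (scale, mul, shift) are illustrative only and assume a
  GCC/Clang toolchain. ]

#include <stdio.h>

/* Roughly how the kernel spells this attribute. */
#define __always_inline inline __attribute__((__always_inline__))

/* Analogous to cycles_2_ns(): scale a cycle count by mul, then shift. */
static __always_inline unsigned long long scale(unsigned long long cyc,
						unsigned int mul,
						unsigned int shift)
{
	return (cyc * mul) >> shift;
}

int main(void)
{
	/*
	 * With __always_inline the multiply/shift is emitted directly in
	 * main(); with a plain static helper the compiler may keep an
	 * out-of-line call, which is the overhead the patch removes from
	 * native_sched_clock().
	 */
	printf("%llu\n", scale(1000000ULL, 5, 1));
	return 0;
}
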
arch/x86/kernel/tsc.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -60,7 +60,7 @@ struct cyc2ns {

static DEFINE_PER_CPU_ALIGNED(struct cyc2ns, cyc2ns);

-void cyc2ns_read_begin(struct cyc2ns_data *data)
+void __always_inline cyc2ns_read_begin(struct cyc2ns_data *data)
{
int seq, idx;

@@ -77,7 +77,7 @@ void cyc2ns_read_begin(struct cyc2ns_dat
} while (unlikely(seq != this_cpu_read(cyc2ns.seq.sequence)));
}

-void cyc2ns_read_end(void)
+void __always_inline cyc2ns_read_end(void)
{
preempt_enable_notrace();
}
@@ -123,7 +123,7 @@ static void __init cyc2ns_init(int cpu)
seqcount_init(&c2n->seq);
}

-static inline unsigned long long cycles_2_ns(unsigned long long cyc)
+static __always_inline unsigned long long cycles_2_ns(unsigned long long cyc)
{
struct cyc2ns_data data;
unsigned long long ns;