[PATCH 2/2] tracing/function-graph-tracer: use the more lightweight local clock

From: Frederic Weisbecker
Date: Wed Mar 04 2009 - 20:01:00 EST


Impact: decrease hang risks with the graph tracer on slow systems

Since the function graph tracer can spend too much time in timer interrupts,
it's better to use the more lightweight local clock, sched_clock(), instead of
cpu_clock(). The function graph traces are more reliable when read on a per-cpu
basis anyway, so a strictly local clock is sufficient here.
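
For context, cpu_clock() is heavier than a raw sched_clock() call: in
kernel/sched_clock.c it disables interrupts around sched_clock_cpu(), which
may in turn take a lock to stabilize the clock across cpus. A rough sketch of
what it does, paraphrased from that file (not the exact source):

	/*
	 * Paraphrased sketch of cpu_clock() from kernel/sched_clock.c
	 * around this kernel version; details may differ:
	 */
	u64 cpu_clock(int cpu)
	{
		u64 clock;
		unsigned long flags;

		local_irq_save(flags);
		clock = sched_clock_cpu(cpu);	/* may lock / clamp the clock */
		local_irq_restore(flags);

		return clock;
	}

sched_clock() skips all of that and just reads the raw local clock, which
matters when the callback runs on every traced function entry and return.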

Signed-off-by: Frederic Weisbecker <fweisbec@xxxxxxxxx>
---
 arch/x86/kernel/ftrace.c             |    2 +-
 kernel/trace/trace_functions_graph.c |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 3925ec0..40960c2 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -436,7 +436,7 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
 		return;
 	}
 
-	calltime = cpu_clock(raw_smp_processor_id());
+	calltime = sched_clock();
 
 	if (ftrace_push_return_trace(old, calltime,
 				self_addr, &trace.depth) == -EBUSY) {
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 2461732..c5038f4 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -112,7 +112,7 @@ unsigned long ftrace_return_to_handler(void)
 	unsigned long ret;
 
 	ftrace_pop_return_trace(&trace, &ret);
-	trace.rettime = cpu_clock(raw_smp_processor_id());
+	trace.rettime = sched_clock();
 	ftrace_graph_return(&trace);
 
 	if (unlikely(!ret)) {
--
1.6.1

