Re: [PATCH] tracing: add cond_resched to ftrace_replace_code()

From: Steven Rostedt
Date: Tue Dec 04 2018 - 14:19:14 EST


On Tue, 4 Dec 2018 20:12:28 +0100
Anders Roxell <anders.roxell@xxxxxxxxxx> wrote:

> When running in qemu on a kernel built with allmodconfig and debug
> options (in particular kcov and ubsan) enabled, a call to
> ftrace_replace_code can take minutes. The ftrace selftest calls
> ftrace_replace_code to loop >40000 times through
> ftrace_make_call/ftrace_make_nop, and these end up calling
> __aarch64_insn_write/aarch64_insn_patch_text_nosync.
>
> Microseconds add up because this is called in a loop for each dyn_ftrace
> record, and this triggers the softlockup watchdog unless we let it sleep
> occasionally.
>
> Rework so that we call cond_resched() if !irqs_disabled() && !preempt_count().
>
> Suggested-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
> Signed-off-by: Anders Roxell <anders.roxell@xxxxxxxxxx>
> ---
> kernel/trace/ftrace.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index c375e33239f7..582e3441e318 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -2419,11 +2419,14 @@ void __weak ftrace_replace_code(int enable)
> {
> struct dyn_ftrace *rec;
> struct ftrace_page *pg;
> + bool schedulable;
> int failed;
>
> if (unlikely(ftrace_disabled))
> return;
>
> + schedulable = !irqs_disabled() && !preempt_count();
> +
> do_for_each_ftrace_rec(pg, rec) {
>
> if (rec->flags & FTRACE_FL_DISABLED)
> @@ -2435,6 +2438,13 @@ void __weak ftrace_replace_code(int enable)
> /* Stop processing */
> return;
> }
> + /*
> + * Some archs call this function with interrupts or preemption
> + * disabled. However, other archs don't and this can cause a
> + * tremendous unneeded latency.
> + */

Actually, could you move the comment up to where schedulable gets set?
(A rough sketch of what I mean follows below the quoted hunk.)

Thanks!

-- Steve

> + if (schedulable)
> + cond_resched();
> } while_for_each_ftrace_rec();
> }
>
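
For reference, here is a rough sketch (not a final patch) of how the
function might read with the comment moved up next to the schedulable
assignment; identifiers are taken from the quoted diff and the
per-record loop body is elided:

void __weak ftrace_replace_code(int enable)
{
	struct dyn_ftrace *rec;
	struct ftrace_page *pg;
	bool schedulable;
	int failed;

	if (unlikely(ftrace_disabled))
		return;

	/*
	 * Some archs call this function with interrupts or preemption
	 * disabled. However, other archs don't and this can cause a
	 * tremendous unneeded latency.
	 */
	schedulable = !irqs_disabled() && !preempt_count();

	do_for_each_ftrace_rec(pg, rec) {

		/* ... existing per-record update, unchanged ... */

		if (schedulable)
			cond_resched();
	} while_for_each_ftrace_rec();
}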