Re: [PATCH 2/2] tracing - fix recursive user stack trace

From: Steven Rostedt
Date: Thu Nov 11 2010 - 16:57:13 EST


On Thu, 2010-11-11 at 08:13 +0800, Li Zefan wrote:
> Jiri Olsa wrote:
> > The user stack trace can fault when examining the trace. Which
> > would call the do_page_fault handler, which would trace again,
> > which would do the user stack trace, which would fault and call
> > do_page_fault again ...
> >
> > Thus this is causing a recursive bug. We need to have a recursion
> > detector here.
> >
>
> I guess this is from what I reported to Redhat, triggered by
> the ftrace stress test. ;)
>
> This patch should be the first patch, otherwise you introduce
> a regression. Though it would merely be a problem in this case,
> it's better to avoid it.

Yeah, this should go into urgent, and the other patch can be queued for
2.6.38.


>
> A nitpick below:
>
> >
> > Signed-off-by: Steven Rostedt <srostedt@xxxxxxxxxx>
> > Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>
> > ---
> > kernel/trace/trace.c | 19 +++++++++++++++++++
> > 1 files changed, 19 insertions(+), 0 deletions(-)
> >
> > diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> > index 82d9b81..0215e87 100644
> > --- a/kernel/trace/trace.c
> > +++ b/kernel/trace/trace.c
> > @@ -1284,6 +1284,8 @@ void trace_dump_stack(void)
> > __ftrace_trace_stack(global_trace.buffer, flags, 3, preempt_count());
> > }
> >
> > +static DEFINE_PER_CPU(int, user_stack_count);
> > +
> > void
> > ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
> > {
> > @@ -1302,6 +1304,18 @@ ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
> > if (unlikely(in_nmi()))
> > return;
> >
> > + /*
> > + * prevent recursion, since the user stack tracing may
> > + * trigger other kernel events.
> > + */
> > + preempt_disable();
> > + if (__get_cpu_var(user_stack_count))
> > + goto out;
> > +
> > + __get_cpu_var(user_stack_count)++;
> > +
> > +
> > +
>
> redundant blank lines.

I can pull this patch with the fix.

Thanks!

-- Steve

>
> > event = trace_buffer_lock_reserve(buffer, TRACE_USER_STACK,
> > sizeof(*entry), flags, pc);
> > if (!event)
> > @@ -1319,6 +1333,11 @@ ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
> > save_stack_trace_user(&trace);
> > if (!filter_check_discard(call, entry, buffer, event))
> > ring_buffer_unlock_commit(buffer, event);
> > +
> > + __get_cpu_var(user_stack_count)--;
> > +
> > + out:
> > + preempt_enable();
> > }
> >
> > #ifdef UNUSED
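
For reference, here is a minimal standalone sketch of the recursion-guard
idea the patch implements. It is only an illustration: it uses a
thread-local counter in plain userspace C as a stand-in for the kernel's
per-CPU counter plus the preempt_disable()/preempt_enable() pairing, and
the function names (trace_userstack, fault_prone_work) are invented for
the example rather than taken from the kernel source.

/*
 * Standalone illustration of the recursion guard used by the patch.
 * In the kernel the counter is per-CPU and is protected by
 * preempt_disable()/preempt_enable() so the task cannot migrate
 * between the check and the increment; here a thread-local variable
 * plays the same role for a single thread.
 */
#include <stdio.h>

static __thread int user_stack_count;  /* stand-in for the per-CPU counter */

static void fault_prone_work(int depth);

static void trace_userstack(int depth)
{
	/* Already tracing on this thread: bail out instead of recursing. */
	if (user_stack_count)
		return;

	user_stack_count++;

	printf("tracing user stack at depth %d\n", depth);

	/* This may "fault" and re-enter the tracer, as in the bug report. */
	fault_prone_work(depth + 1);

	user_stack_count--;
}

static void fault_prone_work(int depth)
{
	/* Simulate the page-fault handler calling back into tracing. */
	if (depth < 5)
		trace_userstack(depth);
}

int main(void)
{
	trace_userstack(0);	/* prints once; the guard stops the recursion */
	return 0;
}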

