Re: [PATCH 2/5] ftrace: use code patching for ftrace graph tracer

From: Steven Rostedt
Date: Wed Nov 26 2008 - 11:47:16 EST




On Tue, 25 Nov 2008, Andrew Morton wrote:

> On Wed, 26 Nov 2008 00:16:24 -0500 Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
>
> > From: Steven Rostedt <rostedt@xxxxxxxxxxx>
> >
> > Impact: more efficient code for ftrace graph tracer
> >
> > This patch uses the dynamic patching, when available, to patch
> > the function graph code into the kernel.
> >
> > This patch will ease the way for letting both function tracing
> > and function graph tracing run together.
> >
> > ...
> >
> > +static int ftrace_mod_jmp(unsigned long ip,
> > +			  int old_offset, int new_offset)
> > +{
> > +	unsigned char code[MCOUNT_INSN_SIZE];
> > +
> > +	if (probe_kernel_read(code, (void *)ip, MCOUNT_INSN_SIZE))
> > +		return -EFAULT;
> > +
> > +	if (code[0] != 0xe9 || old_offset != *(int *)(&code[1]))
>
> erk. I suspect that there's a nicer way of doing this amongst our
> forest of get_unaligned_foo() interfaces. Harvey will know.

Hmm, I may be able to make a struct out of "code".

struct {
	unsigned char op;
	unsigned int offset;
} __attribute__((packed)) code;

Would that look better?
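
Roughly, ftrace_mod_jmp() would then read something like this (untested
sketch, reusing the same helpers and constants as in the patch above):

static int ftrace_mod_jmp(unsigned long ip,
			  int old_offset, int new_offset)
{
	/* overlay for the 5-byte "jmp rel32" at ip */
	struct {
		unsigned char op;	/* expect 0xe9 */
		int offset;		/* matches the int old/new_offset args */
	} __attribute__((packed)) code;

	if (probe_kernel_read(&code, (void *)ip, MCOUNT_INSN_SIZE))
		return -EFAULT;

	if (code.op != 0xe9 || code.offset != old_offset)
		return -EINVAL;

	code.offset = new_offset;

	if (do_ftrace_mod_code(ip, &code))
		return -EPERM;

	return 0;
}

With the packed attribute on the type, sizeof(code) stays at
MCOUNT_INSN_SIZE (5 bytes), so the read and the write still cover
exactly the jmp instruction.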

>
> > +		return -EINVAL;
> > +
> > +	*(int *)(&code[1]) = new_offset;
>
> Might be able to use put_unaligned_foo() here.
>
> The problem is that these functions use sizeof(*ptr) to work out what
> to do, so a cast is still needed. A get_unaligned32(ptr) would be
> nice. One which takes a void* and assumes CPU ordering.

Is there a correctness concern here? This is arch-specific code, so I'm
not worried about other archs.
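
That said, if it is mainly about readability, the get_unaligned() /
put_unaligned() helpers from <asm/unaligned.h> would make it look
something like this (untested, and as you say the casts are still
needed because the helpers key off sizeof(*ptr)):

static int ftrace_mod_jmp(unsigned long ip,
			  int old_offset, int new_offset)
{
	unsigned char code[MCOUNT_INSN_SIZE];

	if (probe_kernel_read(code, (void *)ip, MCOUNT_INSN_SIZE))
		return -EFAULT;

	/* check for "jmp rel32" with the old displacement */
	if (code[0] != 0xe9 ||
	    get_unaligned((int *)&code[1]) != old_offset)
		return -EINVAL;

	/* splice in the new displacement */
	put_unaligned(new_offset, (int *)&code[1]);

	if (do_ftrace_mod_code(ip, &code))
		return -EPERM;

	return 0;
}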

-- Steve

>
> > +	if (do_ftrace_mod_code(ip, &code))
> > +		return -EPERM;
> > +
> > +	return 0;
> > +}
> > +
>
>
>