Re: [PATCH 2/2] ftrace: release functions from hash

From: Steven Rostedt
Date: Tue Oct 21 2008 - 14:28:16 EST

Ingo,

I found a bug with this patch. Do not incorporate it. I'll come up
with a new patch to replace this one.

Thanks,

-- Steve

On Tue, 21 Oct 2008, Steven Rostedt wrote:

> The x86 architecture uses a static recording of mcount caller locations
> and is not affected by this patch.
>
> For architectures still using the dynamic ftrace daemon, this patch is
> critical. It removes the race between the recording of a function that
> calls mcount, the unloading of a module, and the ftrace daemon updating
> the call sites.
>
> This patch adds the releasing of the hash of functions that the daemon
> uses to update the mcount call sites. When a module is unloaded, not
> only is the recorded call site table updated, but now so are the hashed
> functions that the ftrace daemon will use.
>
> Again, architectures that implement MCOUNT_RECORD (currently only x86)
> are not affected by this.
>
> Signed-off-by: Steven Rostedt <srostedt@xxxxxxxxxx>
> ---
> kernel/trace/ftrace.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 44 insertions(+)
>
> Index: linux-compile.git/kernel/trace/ftrace.c
> ===================================================================
> --- linux-compile.git.orig/kernel/trace/ftrace.c 2008-10-21 09:34:51.000000000 -0400
> +++ linux-compile.git/kernel/trace/ftrace.c 2008-10-21 10:31:24.000000000 -0400
> @@ -164,10 +164,14 @@ static DEFINE_SPINLOCK(ftrace_hash_lock)
> #define ftrace_hash_lock(flags) spin_lock_irqsave(&ftrace_hash_lock, flags)
> #define ftrace_hash_unlock(flags) \
> spin_unlock_irqrestore(&ftrace_hash_lock, flags)
> +static void ftrace_release_hash(unsigned long start, unsigned long end);
> #else
> /* This is protected via the ftrace_lock with MCOUNT_RECORD. */
> #define ftrace_hash_lock(flags) do { (void)(flags); } while (0)
> #define ftrace_hash_unlock(flags) do { } while(0)
> +static inline void ftrace_release_hash(unsigned long start, unsigned long end)
> +{
> +}
> #endif
>
> /*
> @@ -354,6 +358,8 @@ void ftrace_release(void *start, unsigne
> }
> }
> spin_unlock(&ftrace_lock);
> +
> + ftrace_release_hash(s, e);
> }
>
> static struct dyn_ftrace *ftrace_alloc_dyn_node(unsigned long ip)
> @@ -1686,6 +1692,44 @@ void __init ftrace_init(void)
> ftrace_disabled = 1;
> }
> #else /* CONFIG_FTRACE_MCOUNT_RECORD */
> +
> +static void ftrace_release_hash(unsigned long start, unsigned long end)
> +{
> + struct dyn_ftrace *rec;
> + struct hlist_node *t, *n;
> + struct hlist_head *head, temp_list;
> + unsigned long flags;
> + int i, cpu;
> +
> + preempt_disable_notrace();
> +
> + /* disable in case we call something that calls mcount */
> + cpu = raw_smp_processor_id();
> + per_cpu(ftrace_shutdown_disable_cpu, cpu)++;
> +
> + ftrace_hash_lock(flags);
> +
> + for (i = 0; i < FTRACE_HASHSIZE; i++) {
> + INIT_HLIST_HEAD(&temp_list);
> + head = &ftrace_hash[i];
> +
> + /* all CPUs are stopped, we are safe to modify code */
> + hlist_for_each_entry_safe(rec, t, n, head, node) {
> + if (rec->flags & FTRACE_FL_FREE)
> + continue;
> +
> + if ((rec->ip >= start) && (rec->ip < end))
> + ftrace_free_rec(rec);
> + }
> + }
> +
> + ftrace_hash_unlock(flags);
> +
> + per_cpu(ftrace_shutdown_disable_cpu, cpu)--;
> + preempt_enable_notrace();
> +
> +}
> +
> static int ftraced(void *ignore)
> {
> unsigned long usecs;
>
> --
>
>
--