Re: [PATCH v7 1/3] trace: Make removal of ring buffer pages atomic

From: Steven Rostedt
Date: Mon May 07 2012 - 20:14:36 EST


On Mon, 2012-05-07 at 14:48 -0700, Vaibhav Nagarnaik wrote:

> The following seems to be the culprit. I am guessing you have a preempt
> kernel?

I'm one of the real-time Linux maintainers, what do you think ;-)

>
> @@ -3662,8 +3808,12 @@ void ring_buffer_reset_cpu(struct ring_buffer *buffer, int cpu)
>  	if (!cpumask_test_cpu(cpu, buffer->cpumask))
>  		return;
>
> +	atomic_inc(&buffer->resize_disabled);
>  	atomic_inc(&cpu_buffer->record_disabled);
>
> +	/* Make sure all commits have finished */
> +	synchronize_sched();
> +
>  	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
>
>
> I guess I can disable resizing in ring_buffer_record_disable(), that
> seems to be a reasonable assumption.
>

Looking into this, the culprit is really the __tracing_reset():

static void __tracing_reset(struct ring_buffer *buffer, int cpu)
{
	ftrace_disable_cpu();
	ring_buffer_reset_cpu(buffer, cpu);
	ftrace_enable_cpu();
}


This function is useless. It's a leftover from when the ring buffer was being
converted to lockless, and today it's no longer needed. Resetting the ring
buffer only requires that the ring buffer be disabled.

The bug is on my end ;-)

We can nuke __tracing_reset() and just call ring_buffer_reset_cpu()
directly. We no longer need those ftrace_disable/enable_cpu() calls. The
comments for the ring_buffer_reset*() functions should state that the ring
buffer must be disabled before calling them.
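
The caller side in kernel/trace/trace.c would then look roughly like this
(a simplified sketch, not necessarily the exact code):

void tracing_reset(struct trace_array *tr, int cpu)
{
	struct ring_buffer *buffer = tr->buffer;

	/* The reset requires the ring buffer to be disabled */
	ring_buffer_record_disable(buffer);

	/* Make sure all commits have finished */
	synchronize_sched();

	/* Call the reset directly, no ftrace_disable/enable_cpu() dance */
	ring_buffer_reset_cpu(buffer, cpu);

	ring_buffer_record_enable(buffer);
}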

Actually, your patch fixes this, as it makes the reset itself take care of
disabling the ring buffer.
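
Roughly, with your hunk applied, ring_buffer_reset_cpu() ends up doing
something like the following (a simplified sketch; the committing check, the
arch_spin_lock around the internal rb_reset_cpu(), and other details are
elided):

void ring_buffer_reset_cpu(struct ring_buffer *buffer, int cpu)
{
	struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];
	unsigned long flags;

	if (!cpumask_test_cpu(cpu, buffer->cpumask))
		return;

	/* Keep resizers and writers away while the buffer is reset */
	atomic_inc(&buffer->resize_disabled);
	atomic_inc(&cpu_buffer->record_disabled);

	/* Make sure all commits have finished */
	synchronize_sched();

	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
	rb_reset_cpu(cpu_buffer);
	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

	atomic_dec(&cpu_buffer->record_disabled);
	atomic_dec(&buffer->resize_disabled);
}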

OK, you don't need to send another patch set (yet ;-). I'll fix the ftrace
side, then apply your patches and see what else breaks.

-- Steve

