Re: [PATCH v1] trace: Fix race in trace_open and buffer resize call

From: Denis Efremov
Date: Thu Jan 21 2021 - 09:35:39 EST


Hi,

This patch (the fix for CVE-2020-27825) was tagged with
Fixes: b23d7a5f4a07a ("ring-buffer: speed up buffer resets by avoiding synchronize_rcu for each CPU")

I'm not an expert here, but it seems that b23d7a5f4a07a only refactored
ring_buffer_reset_cpu() by introducing reset_disabled_cpu_buffer(),
without any significant changes. Hence, the
mutex_lock(&buffer->mutex)/mutex_unlock(&buffer->mutex) fix can be
backported beyond b23d7a5f4a07a~, i.e. to all LTS kernels. Is
b23d7a5f4a07a the actual cause of the bug?

Thanks,
Denis

On 10/6/20 12:33 PM, Gaurav Kohli wrote:
> The following race can occur if trace_open and a resize of the
> cpu buffer run in parallel on different cpus:
> CPUX                                CPUY
>                                     ring_buffer_resize
>                                     atomic_read(&buffer->resize_disabled)
> tracing_open
> tracing_reset_online_cpus
> ring_buffer_reset_cpu
> rb_reset_cpu
>                                     rb_update_pages
>                                     remove/insert pages
> resetting pointer
>
> This race can cause a data abort, or sometimes an infinite loop in
> rb_remove_pages and rb_insert_pages while checking the pages
> for sanity.
>
> Take buffer lock to fix this.
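To make the shape of the race more concrete, here is a minimal userspace
sketch (a pthread analogy with made-up names such as struct buf,
resize_thread() and reset_thread(), not the kernel code): one thread
reallocates a page array the way the resize path changes the page list,
the other walks and reinitializes it the way the reset path does, and the
single mutex plays the role of the buffer->mutex taken by the patch.

/*
 * Minimal pthread analogy of the race fixed above (assumed names, not
 * kernel code).  The mutex stands in for buffer->mutex: without it,
 * the walk in reset_thread() can race with realloc() in resize_thread().
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct buf {
	pthread_mutex_t mutex;		/* stands in for buffer->mutex */
	size_t nr_pages;
	int *pages;			/* stands in for the per-cpu page list */
};

/* Plays the resize side (CPUY in the diagram above). */
static void *resize_thread(void *arg)
{
	struct buf *b = arg;

	for (int i = 0; i < 100000; i++) {
		pthread_mutex_lock(&b->mutex);
		size_t n = (size_t)(i % 8) + 1;
		int *p = realloc(b->pages, n * sizeof(*p));
		if (p) {
			b->pages = p;
			b->nr_pages = n;
		}
		pthread_mutex_unlock(&b->mutex);
	}
	return NULL;
}

/* Plays the reset side (CPUX in the diagram above). */
static void *reset_thread(void *arg)
{
	struct buf *b = arg;

	for (int i = 0; i < 100000; i++) {
		/* Without this lock the walk below races with realloc(). */
		pthread_mutex_lock(&b->mutex);
		for (size_t j = 0; j < b->nr_pages; j++)
			b->pages[j] = 0;
		pthread_mutex_unlock(&b->mutex);
	}
	return NULL;
}

int main(void)
{
	struct buf b = {
		.mutex = PTHREAD_MUTEX_INITIALIZER,
		.nr_pages = 1,
		.pages = calloc(1, sizeof(int)),
	};
	pthread_t resize, reset;

	pthread_create(&resize, NULL, resize_thread, &b);
	pthread_create(&reset, NULL, reset_thread, &b);
	pthread_join(resize, NULL);
	pthread_join(reset, NULL);

	printf("final nr_pages=%zu\n", b.nr_pages);
	free(b.pages);
	return 0;
}

Build with "gcc -pthread"; dropping the lock in reset_thread() lets the
walk observe a freed or shorter array, which is the userspace analogue of
the data abort / infinite loop described in the commit message.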
>
> Signed-off-by: Gaurav Kohli <gkohli@xxxxxxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> ---
> Changes since v0:
> - Addressed Steven's review comments.
>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 93ef0ab..15bf28b 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -4866,6 +4866,9 @@ void ring_buffer_reset_cpu(struct trace_buffer *buffer, int cpu)
>  	if (!cpumask_test_cpu(cpu, buffer->cpumask))
>  		return;
>
> +	/* prevent another thread from changing buffer sizes */
> +	mutex_lock(&buffer->mutex);
> +
>  	atomic_inc(&cpu_buffer->resize_disabled);
>  	atomic_inc(&cpu_buffer->record_disabled);
>
> @@ -4876,6 +4879,8 @@ void ring_buffer_reset_cpu(struct trace_buffer *buffer, int cpu)
>
>  	atomic_dec(&cpu_buffer->record_disabled);
>  	atomic_dec(&cpu_buffer->resize_disabled);
> +
> +	mutex_unlock(&buffer->mutex);
>  }
>  EXPORT_SYMBOL_GPL(ring_buffer_reset_cpu);
>
> @@ -4889,6 +4894,9 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
>  	struct ring_buffer_per_cpu *cpu_buffer;
>  	int cpu;
>
> +	/* prevent another thread from changing buffer sizes */
> +	mutex_lock(&buffer->mutex);
> +
>  	for_each_online_buffer_cpu(buffer, cpu) {
>  		cpu_buffer = buffer->buffers[cpu];
>
> @@ -4907,6 +4915,8 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
>  		atomic_dec(&cpu_buffer->record_disabled);
>  		atomic_dec(&cpu_buffer->resize_disabled);
>  	}
> +
> +	mutex_unlock(&buffer->mutex);
>  }
>
>  /**
>