[PATCH 1/2] oprofile: fix race condition in event_buffer free

From: Robert Richter
Date: Fri Oct 09 2009 - 15:49:41 EST


From: David Rientjes <rientjes@xxxxxxxxxx>

Looking at the 2.6.31-rc9 code, there appears to be a race condition in
the event_buffer cleanup (shutdown) path. It can lead to a kernel panic
because some CPUs may still be operating on the event buffer AFTER it
has been freed. The attached patch closes the race: the buffer is freed
and cleared under buffer_mutex, and readers check that the buffer is
non-NULL before accessing it, since they may have been blocked on the
mutex while the buffer was being freed.
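
To make the window concrete, here is a rough sketch of the pre-patch
paths involved (abbreviated from the 2.6.31 event_buffer.c; argument
validation and the wait for data are elided into comments):

	/* reader side (the oprofiled daemon), abbreviated */
	static ssize_t event_buffer_read(struct file *file, char __user *buf,
					 size_t count, loff_t *offset)
	{
		int retval = -EINVAL;

		/* ... validate count, wait until buffer_ready ... */

		mutex_lock(&buffer_mutex);	/* may block here while ...   */

		atomic_set(&buffer_ready, 0);
		retval = -EFAULT;
		count = buffer_pos * sizeof(unsigned long);
		if (copy_to_user(buf, event_buffer, count))
			goto out;		/* ... the buffer goes away   */
		retval = count;
		buffer_pos = 0;
	out:
		mutex_unlock(&buffer_mutex);
		return retval;
	}

	/* shutdown side, pre-patch: no locking against the reader */
	void free_event_buffer(void)
	{
		vfree(event_buffer);
		event_buffer = NULL;
	}

If free_event_buffer() runs while a reader is blocked on (or already
holds) buffer_mutex, the subsequent copy_to_user() dereferences memory
that has already been vfree()d.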

The race can occur when the buffer is freed while reads are still
pending. It is less clear how add_event_entry() could race with the
free, since all workqueues and handlers are canceled or flushed before
the event buffer is freed, so the check added there only catches a
potential error.
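
For reference, the shutdown ordering, roughly as it reads in oprof.c
(abbreviated, comments mine), is what should already keep the
add_event_entry() callers quiescent before the free:

	void oprofile_shutdown(void)
	{
		mutex_lock(&start_mutex);
		sync_stop();		/* unregister notifiers, flush sync work */
		if (oprofile_ops.shutdown)
			oprofile_ops.shutdown();
		is_setup = 0;
		free_event_buffer();	/* buffer released only after the above  */
		mutex_unlock(&start_mutex);
	}

So the NULL check added to add_event_entry() should never fire in
practice; it only catches a violation of this ordering.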

Signed-off-by: David Rientjes <rientjes@xxxxxxxxxx>
Signed-off-by: Stephane Eranian <eranian@xxxxxxxxxx>
Signed-off-by: Robert Richter <robert.richter@xxxxxxx>
---
drivers/oprofile/event_buffer.c | 14 +++++++++++++-
1 files changed, 13 insertions(+), 1 deletions(-)

diff --git a/drivers/oprofile/event_buffer.c b/drivers/oprofile/event_buffer.c
index 2b7ae36..c38adb3 100644
--- a/drivers/oprofile/event_buffer.c
+++ b/drivers/oprofile/event_buffer.c
@@ -41,6 +41,12 @@ static atomic_t buffer_ready = ATOMIC_INIT(0);
  */
 void add_event_entry(unsigned long value)
 {
+	/*
+	 * catch potential error
+	 */
+	if (!event_buffer)
+		return;
+
 	if (buffer_pos == buffer_size) {
 		atomic_inc(&oprofile_stats.event_lost_overflow);
 		return;
@@ -92,9 +98,10 @@ out:
 
 void free_event_buffer(void)
 {
+	mutex_lock(&buffer_mutex);
 	vfree(event_buffer);
-
 	event_buffer = NULL;
+	mutex_unlock(&buffer_mutex);
 }
 
 
@@ -167,6 +174,11 @@ static ssize_t event_buffer_read(struct file *file, char __user *buf,
 
 	mutex_lock(&buffer_mutex);
 
+	if (!event_buffer) {
+		retval = -EINTR;
+		goto out;
+	}
+
 	atomic_set(&buffer_ready, 0);
 
 	retval = -EFAULT;
--
1.6.5.rc2

