Re: race leading to held mutexes, inode_cache corruption

From: Andrew Morton
Date: Wed Apr 02 2008 - 00:28:40 EST


On Wed, 2 Apr 2008 00:13:04 -0400 "Sapan Bhatia" <sapan.bhatia@xxxxxxxxx> wrote:

> >
> >
> > That's the only way in which I can interpret your second paragraph, but as
> > far as I can tell the code cannot do that.
> >
> > Can you provide more detail?
> >
>
> On running the example again, it seems that attributing the problem to a
> generic locking bug was a misdiagnosis. I apologize for the misinformation.
> The error is more likely due to a path with a dangling mutex_lock somewhere, or
> something else. I'll investigate further and try to provide a more detailed
> description of the problem when I have something concrete.
>

OK, thanks.
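
For what it's worth, a "dangling mutex_lock" usually means an early-return
path that skips the unlock.  A minimal, hypothetical sketch of that shape
(the lock, function and helper names below are placeholders, not taken from
your report):

static DEFINE_MUTEX(cache_mutex);              /* placeholder lock */

static int update_cache(struct inode *inode)   /* placeholder function */
{
        int err;

        mutex_lock(&cache_mutex);

        err = validate_inode(inode);           /* placeholder helper */
        if (err)
                return err;    /* bug: returns with cache_mutex still held */

        mutex_unlock(&cache_mutex);
        return 0;
}

The next caller of mutex_lock(&cache_mutex) then blocks forever, which would
show up as tasks stuck waiting on a held mutex.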

Recent kernels have this:

config DEBUG_LOCK_ALLOC
        bool "Lock debugging: detect incorrect freeing of live locks"
        depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
        select DEBUG_SPINLOCK
        select DEBUG_MUTEXES
        select LOCKDEP
        help
          This feature will check whether any held lock (spinlock, rwlock,
          mutex or rwsem) is incorrectly freed by the kernel, via any of the
          memory-freeing routines (kfree(), kmem_cache_free(), free_pages(),
          vfree(), etc.), whether a live lock is incorrectly reinitialized via
          spin_lock_init()/mutex_init()/etc., or whether there is any lock
          held during task exit.
which seems rather relevant, no? ;)
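
To make that concrete, the misuses it flags look roughly like these
hypothetical sketches (placeholder struct and function names, illustration
only):

struct foo {
        struct mutex lock;
        /* ... */
};

static void bad_free(struct foo *f)
{
        mutex_lock(&f->lock);
        kfree(f);              /* flagged: freeing memory containing a held lock */
}

static void bad_reinit(struct foo *f)
{
        mutex_lock(&f->lock);
        mutex_init(&f->lock);  /* flagged: reinitializing a live, held mutex */
        mutex_unlock(&f->lock);
}

Exiting a task with a mutex still held is reported in the same way, so
booting a kernel with this option enabled may point straight at the
offending path.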