Re: [PATCH] Breaking down the global IPC locks - 2.5.31

From: Hugh Dickins
Date: Wed Aug 21 2002 - 11:51:26 EST

On Tue, 20 Aug 2002, mingming cao wrote:
> >
> > This patch breaks the three global IPC locks into one lock per IPC ID.
> > By doing so it could reduce possible lock contention in workloads which
> > make heavy use of IPC semaphores, message queues and shared memory
> > segments, etc.
> Here is the patch again. Fixed a typo. *_^

Looks good to me...

Except that last time around I persuaded you that ipc_lockall, ipc_unlockall
(shm_lockall, shm_unlockall) needed updating; now I think they were
redundant all along and can simply be removed. They are only used by
SHM_INFO. I'd expected you to make them read_locks: surprised to find
write_locks, I thought about it some more and realized you would indeed
need write_locks - except that the down(&shm_ids.sem) is already protecting
against everything the write_lock would protect against (the values
reported, concurrent freeing of an entry, concurrent reallocation of the
entries array). If you agree, please just delete all the ipc_lockall,
ipc_unlockall, shm_lockall and shm_unlockall lines. Sorry for not
noticing that earlier.

You convinced me that it's not worth trying to remove the ipc_ids.sems,
but I'm still slightly worried that you add another layer of locking.
There's going to be no contention over those read_locks (the write_lock
is only taken when reallocating the entries array), but their cachelines
will still bounce around. I don't know whether contention or bouncing was
the more important effect before, but if it was bouncing then these changes
may prove disappointing in practice. Performance results (or an experienced
voice - I've little experience of such tradeoffs) would help dispel doubt.
Perhaps, if Read-Copy Update support is added to the kernel in future,
RCU could be used here instead of rwlocking?

Nit: I'd prefer "= RW_LOCK_UNLOCKED" instead of "=RW_LOCK_UNLOCKED".



This archive was generated by hypermail 2b29 : Fri Aug 23 2002 - 22:00:24 EST