RE: [robustmutexes] Re: [RFC/PATCH] FUSYN Realtime & robust mutexes for Linux, v2.3.1

From: Perez-Gonzalez, Inaky
Date: Thu Aug 05 2004 - 13:41:55 EST


> From: Rusty Russell
>
> On Thu, 2004-08-05 at 17:37, Ulrich Drepper wrote:
> > Andrew Morton wrote:
> > > Passing the lock to a non-rt task when there's an rt-task waiting for it
> > > seems pretty poor form, too.
> >
> > No no, that's not what is wanted. Robust mutexes are a special kind of
> > mutex and not related to rt issues. Lockers of robust mutexes have to
> > register with the kernel (i.e., the locking must actually be performed
> > by the kernel) so that in case the thread goes away or the entire
> > process dies, the mutex is unlocked and other waiters (other threads, in
> > the same or other processes) can get the lock.
>
> I don't think this is necessarily true: I think that platforms with
> 64-bit compare-and-exchange can do the whole thing in userspace. They
> would set the mutex and stamp in the thread ID simultaneously, allowing
> for "dead thread" detection (ie. I didn't get the lock, and it's a
> robust mutex: check the holder is still alive).
>
> W/o 64-bit compare-and-exchange a 100% robust solution may not be
> possible though.

Exactly, this is the only weakness in the implementation. On 32 bits, the
space for stamping an ID in user space that the kernel can resolve back to
a task is too limited to avoid PID-reuse conflicts. As soon as I have some
time I want to try hashing something like an xPID out of task->pid and
task->start_time, which should be pretty much unique. On 64-bit arches that
hash should be nearly unbreakable, with a collision chance so slim as to be
negligible.
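
Just to show the shape of what I mean (the names and the mixing constant
are made up, this is not a patch), the xPID would be something like:

#include <stdint.h>

struct task_stamp {             /* stand-ins for task->pid, task->start_time */
        uint32_t pid;
        uint64_t start_time;
};

static uint32_t xpid(const struct task_stamp *t)
{
        uint64_t h = ((uint64_t)t->pid << 32) ^ t->start_time;

        h *= 0x9e3779b97f4a7c15ULL;     /* any decent integer mixer would do */
        return (uint32_t)(h >> 32);     /* 32-bit: fold; 64-bit: keep all of h */
}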

On 32-bit arches, though (or on all of them), I want to add some more code
to the task-ID lookup that checks whether the task that was found shares
any memory with the calling task. If it does not, that is a collision and
means the original owner died previously. If it does, it can be taken as a
true positive [still with a small chance of collision, but the hashing plus
this check should make it truly slim already].
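
The real check would walk the tasks' VMAs inside the kernel; purely to
illustrate the idea (names made up), the same test can be mimicked from
user space by comparing the shared mappings listed in /proc/<pid>/maps:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Collect "dev:inode" keys of the shared mappings of one pid. */
static int shared_maps(pid_t pid, char keys[][64], int max)
{
        char path[64], line[256];
        int n = 0;
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%d/maps", pid);
        f = fopen(path, "r");
        if (!f)
                return -1;
        while (n < max && fgets(line, sizeof(line), f)) {
                char perms[8], dev[16];
                unsigned long ino;
                /* format: start-end perms offset dev inode [path] */
                if (sscanf(line, "%*s %7s %*s %15s %lu", perms, dev, &ino) == 3
                    && perms[3] == 's' && ino != 0)
                        snprintf(keys[n++], 64, "%s:%lu", dev, ino);
        }
        fclose(f);
        return n;
}

/* Two tasks "share memory" if any shared mapping key matches. */
static int tasks_share_memory(pid_t a, pid_t b)
{
        char ka[128][64], kb[128][64];
        int na = shared_maps(a, ka, 128), nb = shared_maps(b, kb, 128);
        int i, j;

        for (i = 0; i < na; i++)
                for (j = 0; j < nb; j++)
                        if (!strcmp(ka[i], kb[j]))
                                return 1;
        return 0;
}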

Increasing the ratio of PID space to the actual number of running tasks
should also improve things. Ingo, you wrote the PID allocator -- does
raising the PID maximum automatically mean PIDs take longer to be reused,
or am I completely wrong?

Final thing: when you cannot live with even that slim chance of collision,
you can always flip a bit in the mutex (pthread_mutexattr_setfast_np()) and
it will always go through the kernel. Total determinism, at the expense of
performance.
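
From the application side it would look more or less like this (the exact
prototype of the _np call is not settled, so take the second argument as a
guess):

#include <pthread.h>

static pthread_mutex_t m;

static void init_deterministic_mutex(void)
{
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        /* 0 == "not fast": every lock/unlock goes through the kernel */
        pthread_mutexattr_setfast_np(&attr, 0);
        pthread_mutex_init(&m, &attr);
        pthread_mutexattr_destroy(&attr);
}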

Iñaky Pérez-González -- Not speaking for Intel -- all opinions are my own (and my fault)