Re: [RFC] different proposal for mq_notify(SIGEV_THREAD)

From: Jamie Lokier
Date: Wed Mar 10 2004 - 15:40:55 EST

Manfred Spraul wrote:
> Problem:
> - high resource usage: one fd for each pending notification.
> - complex user space.
> New proposal:
> mq_notify(SIGEV_THREAD) receives two additional parameters:
> - a 16-byte cookie.
> - a file descriptor of a special notify file. The notify file is similar
> to a pipe. The main difference is that writing to it mustn't block,
> therefore the buffer handling differs.
> If the event happens, then the kernel "writes" the cookie to the notify
> file.
> User space reads the cookie and calls the notification function.
> Problems:
> - More complexity in kernel.
> - How should the notify fd be created? Right now it's mq_notify with
> magic parameters, probably a char device in /dev is the better approach.

Wouldn't it make more sense to use epoll for this?

At the moment, async futexes use one fd per futex, and if you want to
wait for multiple ones you have to use select, poll or epoll.

If you want to collect from multiple event sources through a single
fd, you can use epoll. That seems remarkably similar to what you're
proposing for mq_notify().

The difference is that your proposal eliminates those fds.
But there is no reason that I can see why mq_notify() should be
optimised in this way and futexes not.

If you have a cookie mechanism especially for mq events, why not for
futexes, aio completions, timers, signals (especially child
terminations) and dnotify events as well?

> I think that the added complexity is not worth the effort if the notify
> fd is only used for posix message queues. Are there other users that
> could use the notify file? How is SIGEV_THREAD implemented for aio and
> timers?

Presently, futexes, aio completions, timers, signals and dnotify
events could all usefully use a notify file. Not just for message queues.

-- Jamie