The overhead is not performance overhead -- it's
development overhead. Threads offer two interrelated
RAD gains:
-- They free the developer from being forced to decide
exactly _how_ to save state (they essentially give you
"closures", as Nils points out). You don't have to
codify your states.
-- They free the developer from being forced to decide
how much work to perform in response to a given event
before yielding, in order to achieve "fairness" -- the
scheduler preempts on their behalf.
People who dismiss threaded development often ignore these
gains (and tend to exaggerate the difficulty of threaded
debugging). Not every app needs to be a hand-tuned
ultra-scalable performance machine.
Unfortunately, this convenience is seductive. It tempts
application authors to pursue a straightforward threaded
model even when their scalability requirements merit a
more hand-tuned approach. This malady apparently infected
the Java designers as well.
In terms of Linux kernel threads, let me add that I'd
love to have a CLONE_SIGGRP (or whatever) flag that would
allow Posix thread signal semantics to be cleanly
implemented.
Not because I'm a slave to Posix, but because I've got an
embedded OS emulator that runs under Linux and which does
preemptive userspace context switching. Posix signal
delivery semantics would have saved me from having to
worry about certain race conditions. But this is hardly
a typical usage case...
miket