Re: How to increat [sic.] max open files?

Baldur Norddahl (bbn@dark.x.dtu.dk)
Fri, 3 Jan 1997 19:20:05 +0100 (MET)


On Fri, 3 Jan 1997, Richard B. Johnson wrote:

> The answer to the question cannot be "yes". Just because "that's the way
> it's presently done", does not qualify as "efficient", correct, or anything
> else.

> A properly designed server does not need to communicate between anything
> except the Client that it serves and the "database" that it accesses on
> behalf of the Client. Record locking maintains "database" integrity. The
> quoted database means "any shared resource".

In a mud your database is changing constantly as the virtual world is
simulated in real time. This happens purely in memory, so only the main
process itself or a thread can access it. Last time I looked, the
mainstream linux packages (and other unixes; muds are generally made
multiplatform) didn't support threads. So you are really stuck with a
single process dealing with all the connections itself.

Another point is that even if you think there is a more efficient way to
do this, all existing software is written for the old principle. If
linux wants to support big server applications that work fine on other
unixes, it has to support a single process opening hundreds of
sockets.

> Now, it is not efficient to kick-off a separate child to handle each
> connection. It is also not efficient to have a single task handle everything.
> There is some design necessary to figure out what goes in between.

In the mud case it is actually efficient to keep everything in a single
process. At least that is what the profilers say. It is the simulation of
the virtual world that hogs the CPU, not the client handling. Having to
lock the objects in the virtual world would just complicate the simulation
and thereby make it slower.

What theoretical background do you have for concluding that using a
single process to handle the clients is NEVER efficient?

Baldur