You compute the minimum at session/job startup rather than trying to
preallocate. All processes started after that deduct from the session/job
accounting record. This way the job accounting record always holds the
current maximum limits for all processes started by the job. Any other
optimizations are left up to the implementation.
Only a single comparison per resource should be needed. Now, if the system
is oversubscribed, there may need to be more... but only (political)
management
can force oversubscription. And if they do, then they deserve the system
hang/crash that may occur. BTW, that really does happen. We had a Cray
system that had swap oversubscribed - every time someone filled the actual
swap
space the system would hang. And it kept happening until the oversubscription
was eliminated. This problem also occurs on SGI Origin systems that are
oversubscribed, but there the processes get killed - even system processes
like inetd, getty, init... anything that attempts to use virtual memory.
>So much limits is really an unnecessary overhead.
It is "unnecessary" only if you do not have to justify the next upgrade
of a 100-node Beowulf cluster (with possibly 200 CPUs and 100 GB of
memory), or justify a larger allocation on an existing system.
Unnecessary is relative to the size of the system.
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@navo.hpc.mil
Any opinions expressed are solely my own.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/