Re: memory crash

William Burrow (aa126@fan.nb.ca)
Tue, 10 Dec 1996 14:19:39 -0400 (AST)


On Tue, 10 Dec 1996, Phillip Dillinger wrote:

> You can use bash's ulimit to limit things such as core dump size, text
> memory size, etc. Like this:
>
> ulimit -d 1024
>
> will limit a single process's data segment to 1M. Look at man bash for more
> options. Also, other shells have that feature.

Which, of course, is useless for handling big jobs on the machine.
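For what it's worth, the quoted mechanism looks like this in practice (a sketch; whether 1 MB is a sensible cap depends entirely on the job):

```shell
#!/bin/sh
# Run in a subshell so the limit does not stick to the login shell.
(
  ulimit -d 1024                           # cap the data segment at 1024 kB (1 MB)
  echo "data seg limit: $(ulimit -d) kB"   # prints: data seg limit: 1024 kB
  # Any process started from here that tries to grow its heap past
  # ~1 MB would see the allocation fail (malloc returns NULL) --
  # which is exactly why this is useless for legitimately big jobs.
)
```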

I was wondering why the VAX might be so much better than Linux at
running with lots of swap. There are a number of potential reasons.
One is that the CPU runs at nearly the same speed as the disk. Another
is that the actual amount of RAM a process is allowed to use is limited
to its working set size. This causes a lot of paging, but paging is not
a big penalty on a system where disk and CPU run at near identical
speeds.

However, this idea could be used in an attempt to limit the actual RAM
a process is allowed to use, but not its VM. That would prevent the
case where a single process consumes all available RAM and stops other
processes from running (by forcing them to be swapped out; part of the
current problem). I'm not sure whether this would be a kernel change or
not; I don't know the architecture at this level.
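Modern shells do expose roughly this split, for what it's worth (a
sketch with bash's ulimit; note that many kernels accept a resident-set
limit but do not strictly enforce it):

```shell
#!/bin/sh
# Sketch: limit a process's resident RAM without capping its total VM.
# ulimit -m sets the max resident set size (RSS) in kilobytes;
# ulimit -v sets the max virtual memory (address space) in kilobytes.
(
  ulimit -m 4096        # try to keep at most ~4 MB resident in RAM
  # ulimit -v is left alone, so total virtual memory stays unlimited:
  # pages beyond the RSS limit would be pushed out to swap rather than
  # the allocation being refused outright.
  echo "RSS limit: $(ulimit -m) kB, VM limit: $(ulimit -v)"
)
```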

--
William Burrow  --  Fredericton Area Network, New Brunswick, Canada
Copyright 1996 William Burrow  
Canada's federal regulator says it may regulate content on the Internet to
provide for more Canadian content.   (Ottawa Citizen 15 Nov 96 D15)