Re: Some questions about linux kernel.

From: Jesse Pollard (pollard@tomcat.admin.navo.hpc.mil)
Date: Fri Mar 17 2000 - 08:24:54 EST


Paul Jakma <paul.jakma@compaq.com> wrote:
> On Thu, 16 Mar 2000, James Sutherland wrote:
>
> > No. *ANY* memory allocation system can run out of memory. Avoiding
> > "overcommitting" would make the OOM situation arise SOONER (and more
> > frequently), as well as killing performance.
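To see where the overcommitted promises come from in the first place,
consider copy-on-write fork(): a big process that forks merely to exec a
small helper would, under strict accounting, force the kernel to reserve
its entire address space a second time, even though almost none of it
ever gets copied. A minimal sketch (the 512MB figure is only
illustrative):

#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BIG (512 * 1024 * 1024)		/* pretend this is a big app */

int main(void)
{
	char *heap = malloc(BIG);
	if (heap == NULL)
		return 1;
	memset(heap, 1, BIG);		/* actually use the memory */

	/* With overcommit, fork() costs almost nothing: the child
	 * shares these pages copy-on-write.  Without overcommit, the
	 * kernel must reserve another 512MB of RAM+swap right here,
	 * or fail the fork - even though the child throws the whole
	 * reservation away a moment later by calling exec(). */
	if (fork() == 0) {
		execlp("ls", "ls", (char *)NULL);
		_exit(127);
	}
	return 0;
}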
> >
>
> well, it's more a question of whether you make promises that you might
> not be able to keep. If you do (i.e. overcommit) then it's your
> (kernel) problem. If you don't, it's not.
>
> Without overcommit the /system/ can run out of memory, of course - it's
> finite - but it's no longer a kernel problem.
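Concretely, the two policies differ in where the failure lands. A
minimal sketch, assuming the requested size exceeds RAM+swap (the size
is illustrative; recent kernels expose the policy through the
/proc/sys/vm/overcommit_memory sysctl):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	size_t huge = 1UL << 31;	/* 2GB: more than RAM+swap here */
	char *p = malloc(huge);

	if (p == NULL) {
		/* A non-overcommitting kernel says no up front, and
		 * the application can cope: no promise was made. */
		fprintf(stderr, "denied cleanly, can back off\n");
		return 1;
	}

	/* Under overcommit the malloc() "succeeds"; the broken
	 * promise only surfaces when the pages are touched, and by
	 * then the only way out is to kill somebody. */
	memset(p, 0, huge);
	return 0;
}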
>
> > Right... Now we'll try this on the university's central Unix system, shall
> > we? Let's see... 6000 users, 2GB RAM+swap. They get about 300K each.
> > That's ALMOST enough to log in with!
>
> well then get more RAM/swap. But at least it has become a hw issue.

It's still not right - the assumption that you have 6000 concurrent users
on a 2GB system is unreasonable. It is much more likely that there
are only 40-50 concurrent users. If the limit is 40 then it becomes 2GB/40,
or ~50MB per user, not 300K. After that what you have is a management decision
on system expansion, AND the data to back that decision. I have worked in
such an environment, and that's the way it is.

>
> > The alternative just isn't practical in any real-world situation, except
> > more-or-less single user boxes.
>
> the alternative is very practical. boxes where oracle runs as one user /
> apache as another for example. You might want to lock down something
> that you know can cause problems.. etc..
>
> > Even then, that single user will still run
> > out of memory - it's just his quota, rather than a physical limit.
> >
>
> obviously.
>
> > So his processes STILL end up dying randomly, but they do it sooner rather
> > than later. Hrm. Wow.
> >
>
> but then it's the user's problem - not the OS's.
>
> > On any realistically specced box, it is almost zero already.
> >
>
> no it's not. a plain linux box today has a terrible tendency to die if
> an app misbehaves wrt memory. It's that bad. (unless you think 1GB of
> RAM + 4GB+ of swap is a reasonable spec for a desktop)
>
> per-process limits go a long, long way to solving this (easy with
> pam_limits) but not far enough; per-user limits would be a huge
> improvement...
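For reference, pam_limits reads /etc/security/limits.conf; entries along
these lines cap what any one user's processes can take (item names and
units vary a little between PAM versions, and the values below are only
illustrative):

# /etc/security/limits.conf - applied by pam_limits at login
# <domain>	<type>	<item>		<value>
@users		hard	data		65536	# data segment, KB
@users		hard	rss		65536	# resident set, KB
@users		hard	nproc		64	# processes per user
*		hard	maxlogins	4	# refuse a fifth login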
>
> > No, it's a risk with *EVERY* OS.
>
> no it's not. A non-overcommitting OS doesn't run out of VM for
> processes. It cleanly grants or denies memory requests. What the app
> does after that is not a kernel problem.

I agree that it is not a problem with the OS.
It can also prevent additional logins if the resources for the new login
are not available to the specific user attempting to log in. This is done
in a LOT of UNIX systems, as long as they have functioning per-user
resource controls.
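What those per-user controls amount to at login time is roughly this
(a sketch, not any particular login program; lookup_user_quota() is a
made-up stand-in for the administrator's policy):

#include <sys/types.h>
#include <sys/resource.h>
#include <stdio.h>
#include <unistd.h>

/* made-up stand-in for reading the per-user policy (limits.conf,
 * a quota database, whatever the site uses) */
static rlim_t lookup_user_quota(uid_t uid)
{
	(void)uid;			/* one size fits all here */
	return 64 * 1024 * 1024;	/* 64MB data segment */
}

int spawn_session(uid_t uid, const char *shell)
{
	struct rlimit rl;

	rl.rlim_cur = rl.rlim_max = lookup_user_quota(uid);

	/* every process in the session inherits this ceiling, so a
	 * runaway app fails its own allocations instead of dragging
	 * the whole machine into an OOM situation */
	if (setrlimit(RLIMIT_DATA, &rl) != 0) {
		perror("setrlimit");
		return -1;		/* refuse the login instead */
	}
	return execl(shell, shell, (char *)NULL);
}

(pam_limits does the same kind of thing during session setup, and its
maxlogins entry refuses the session outright when a user already holds
too many.)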
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@navo.hpc.mil

Any opinions expressed are solely my own.



