On Wed, 15 Mar 2000, Paul Jakma wrote:
> On Tue, 14 Mar 2000, Rik van Riel wrote:
> > Indeed. There is always a way to fool any OOM killer.
> > The only thing that could save us here are proper
> > per-user resource limits,
> so isn't that what should be concentrated on? Per-user limits are a
> cleaner way of fixing the problem.
They would AVOID the problem in a lot of cases - but not always. Yes, we
should have per-user resource limits. Yes, they would help prevent
malloc-bombs from being effective. No, they will not *prevent* OOM
situations entirely.
> i guess though it's a 2.5 thing. Rik, do you know of anyone
> investigating per-user limits for 2.5? (a lot of linux users would
> like to see it.. hint hint..)
> > but even then we'd still want
> > an emergency OOM killer to rescue the system in situations
> > where we hit the wall...
> OOM is a stopgap. Ideally we should be able to set ironclad policies so
> that we never encounter OOM.
I doubt that could really be achieved. Dynamically reducing user rlimits
to try to prevent them overloading the system would ALMOST achieve this -
but what if a root process blows up? What if a couple of users all hit
their resource limits at once? While individually they may not have high
enough resource limits to OOM the box, a group of users together would
still be able to.
We can, of course, make OOM *almost* impossible. Per-user rlimits, plus a
first-line userspace daemon (to catch true OOM situations before they
arise) will get us very close - but never all the way.
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to firstname.lastname@example.org
Please read the FAQ at http://www.tux.org/lkml/
This archive was generated by hypermail 2b29 : Wed Mar 15 2000 - 21:00:30 EST