Re: Possible ways of dealing with OOM conditions.

From: Evgeniy Polyakov
Date: Fri Jan 19 2007 - 17:57:35 EST


On Fri, Jan 19, 2007 at 01:53:15PM +0100, Peter Zijlstra (a.p.zijlstra@xxxxxxxxx) wrote:
> > 2. You differentiate by hand between critical and non-critical
> > allocations by marking some kernel users as allowed to allocate
> > from the reserve.
>
> True, all sockets that are needed for swap, no-one else.
>
> > This does not prevent the NVIDIA module from allocating
> > from that reserve too, does it?
>
> All users of the NVidiot crap deserve all the pain they get.
> If it breaks they get to keep both pieces.

I meant that pretty much anyone can become such a user, simply by
adding a bit to the gfp_flags they use for allocation.
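
For illustration, a minimal sketch of the concern; the flag name
__GFP_MEMALLOC below is an assumption standing in for whatever bit your
patches use to mark an allocation as reserve-eligible, and grab_buffer()
is a made-up helper:

/*
 * Hypothetical out-of-tree module: nothing stops it from tagging its own
 * allocations as "may dip into the emergency reserve" simply by OR-ing
 * the bit into its gfp mask.
 */
#include <linux/gfp.h>
#include <linux/slab.h>

static void *grab_buffer(size_t len)
{
	/* ordinary atomic allocation, but marked reserve-eligible */
	return kmalloc(len, GFP_ATOMIC | __GFP_MEMALLOC);
}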

> > And you artificially limit the
> > system to processing only tiny bits of what it must do, thus potentially
> > overlooking paths which must use the reserve too.
>
> How so? I cover pretty much every allocation needed to process an skb by
> setting PF_MEMALLOC - the only drawback there is that the reserve might
> not actually be large enough, because it covers more allocations than
> were considered. (that's one of the TODO items: validate the reserve
> function's parameters)

You only covered IPv4/IPv6 and ARP, and maybe some route updates.
But it is quite possible that some allocations are missed, such as
multicast/broadcast. Selecting only special paths out of all the
possible network allocations tends to create a situation where something
is missed or is cross-dependent on other paths.
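
To make that concrete, here is a rough sketch (not code from your patch
set) of what a PF_MEMALLOC-guarded receive path looks like; anything that
allocates outside the guarded window - say a multicast helper running
from its own timer context - gets no access to the reserve:

/*
 * Rough sketch only: allocations made while PF_MEMALLOC is set may dip
 * into the reserve; allocations made on paths outside this window
 * (multicast/broadcast helpers, timers, workqueues) do not.
 */
#include <linux/sched.h>
#include <linux/skbuff.h>

static void rx_process(struct sk_buff *skb)
{
	unsigned long memalloc = current->flags & PF_MEMALLOC;

	current->flags |= PF_MEMALLOC;
	/* ... IPv4/IPv6/ARP processing that may allocate ... */
	if (!memalloc)
		current->flags &= ~PF_MEMALLOC;
}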

> > So, the solution is to have a reserve in advance, and to manage it via a
> > special path when the system is in OOM. So you would have a network
> > memory reserve, which is used when the system is in trouble. It is very
> > similar to what you had.
> >
> > But the whole reserve must never be touched under normal conditions, and
> > when it is used, it must not be used by those who can create the OOM
> > condition; thus it should be exported to, for example, the network only,
> > so that when the system is in trouble the network is still functional
> > (although only the critical paths).
>
> But the network can create OOM conditions for itself just fine.
>
> Consider the remote storage disappearing for a while (it got rebooted,
> someone tripped over the wire etc..). Now the rest of the network
> traffic keeps coming and will queue up - because user-space is stalled,
> waiting for more memory - and we run out of memory.

Hmm... Neither UDP nor TCP actually works that way.

> There must be a point where we start dropping packets that are not
> critical to the survival of the machine.

You can still drop them; the main point is that network allocations do
not depend on other allocations.
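
As a sketch of that point (net_reserve_used, NET_RESERVE_SIZE and the
"critical" flag are all made up for illustration): with a dedicated
network pool the drop decision is made against that pool alone, so it
never waits on the state of the rest of the allocator:

/*
 * Sketch only: packets are charged against a private network reserve,
 * and non-critical traffic is dropped as soon as that reserve runs low -
 * no dependency on other allocations anywhere in the system.
 */
#include <linux/skbuff.h>
#include <asm/atomic.h>

static atomic_t net_reserve_used = ATOMIC_INIT(0);
#define NET_RESERVE_SIZE	(1 << 20)	/* 1 MiB, arbitrary */

static int net_reserve_admit(struct sk_buff *skb, int critical)
{
	if (!critical &&
	    atomic_read(&net_reserve_used) + skb->truesize > NET_RESERVE_SIZE) {
		kfree_skb(skb);		/* drop non-critical traffic early */
		return 0;
	}
	atomic_add(skb->truesize, &net_reserve_used);
	return 1;
}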

> > An even further development of this idea is to prevent such an OOM
> > condition entirely - by starting swapping early (but wisely) and
> > reducing memory usage.
>
> These just postpone execution but will not avoid it.

No. If the system allows such a condition to arise at all, then
something is broken. It must be prevented, instead of creating special
hacks to recover from it.
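
The shape of that idea, as a very rough sketch (all three helpers are
hypothetical stand-ins; a real kernel would drive this through the zone
watermarks and kswapd): start background reclaim/swapout well before
free memory reaches the point where allocations begin to fail, so the
emergency never materialises:

/*
 * All helpers below are hypothetical, declared only so the sketch is
 * self-contained; the point is that swapout starts early and in the
 * background, instead of waiting for direct reclaim or OOM.
 */
extern unsigned long free_pages_estimate(void);
extern unsigned long early_watermark(void);
extern void start_background_swapout(void);

static void maybe_reclaim_early(void)
{
	if (free_pages_estimate() < early_watermark())
		start_background_swapout();
}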

--
Evgeniy Polyakov