Re: [PATCH] trace: Set oom_score_adj to maximum for ring buffer allocating process

From: David Rientjes
Date: Thu May 26 2011 - 16:33:56 EST


On Thu, 26 May 2011, Vaibhav Nagarnaik wrote:

> > Hmm, have you tried this in practice? Yes we may kill the "echo" command
> > but it doesn't stop the ring buffer from being allocated, and thus
> > killing the echo command may not be enough, and those critical processes
> > that you are trying to protect will be killed next.
> >
>
> Yes I did try this and found that it works as we intend it to. When
> oom-killer is invoked, it picks the process which has lowest
> oom_score_adj and kills it or one of its children.

s/lowest/highest/

> When the process is
> getting killed, any memory allocation from it returns -ENOMEM, which
> gets handled in our allocation process, and we free up the previously
> allocated memory.
>

Not sure that's true; this is allocating with kzalloc_node(GFP_KERNEL),
correct? If current is oom killed, it gets access to all memory
reserves, which increases the likelihood that the allocation will
succeed before the SIGKILL is handled.
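
If the intent is to actually stop allocating once the task has been oom
killed, the loop would need an explicit check for the pending SIGKILL
rather than relying on -ENOMEM. Rough sketch only, not the actual
ring_buffer.c code (the function name and the list argument are made up
here to keep the idea self-contained):

static int rb_alloc_pages_sketch(struct list_head *pages,
				 unsigned nr_pages, int cpu)
{
	struct buffer_page *bpage, *tmp;
	unsigned i;

	for (i = 0; i < nr_pages; i++) {
		/*
		 * Once current is oom killed, GFP_KERNEL allocations
		 * tend to dip into the memory reserves and succeed
		 * instead of returning NULL, so check the fatal signal
		 * explicitly.
		 */
		if (fatal_signal_pending(current))
			goto free_pages;

		bpage = kzalloc_node(ALIGN(sizeof(*bpage),
					   cache_line_size()),
				     GFP_KERNEL, cpu_to_node(cpu));
		if (!bpage)
			goto free_pages;
		list_add_tail(&bpage->list, pages);
	}
	return 0;

free_pages:
	/* undo the partial allocation and report failure */
	list_for_each_entry_safe(bpage, tmp, pages, list) {
		list_del_init(&bpage->list);
		kfree(bpage);
	}
	return -ENOMEM;
}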

> This API is now being used in other parts of kernel too, where it knows
> that the allocation could cause OOM.
>

What's wrong with using __GFP_NORETRY to avoid oom killing entirely and
then failing the ring buffer memory allocation? Seems like a better
solution than relying on the oom killer, since there may be other threads
with a max oom_score_adj as well that would appear in the tasklist first
and get killed unnecessarily. Is there some ring buffer code that can't
handle failing allocations appropriately?
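
For reference, something like this (untested, same sketch as above with
only the gfp mask changed) is all I'm suggesting:

		bpage = kzalloc_node(ALIGN(sizeof(*bpage),
					   cache_line_size()),
				     GFP_KERNEL | __GFP_NORETRY,
				     cpu_to_node(cpu));
		if (!bpage)
			goto free_pages;	/* resize fails with -ENOMEM */

With __GFP_NORETRY the page allocator gives up instead of looping and
invoking the oom killer, so an oversized write to buffer_size_kb just
returns -ENOMEM to userspace and nothing needs to be killed.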