Re: [PATCH] ring-buffer: Add set/clear_current_oom_origin() during allocations

From: Joel Fernandes
Date: Wed Apr 04 2018 - 19:59:29 EST


Hi Steve,

On Wed, Apr 4, 2018 at 9:18 AM, Joel Fernandes <joelaf@xxxxxxxxxx> wrote:
> On Wed, Apr 4, 2018 at 9:13 AM, Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
> [..]
>>>
>>> Also, I agree with the new patch and its nice idea to do that.
>>
>> Thanks, want to give it a test too?

With the latest tree and the diff below, a large buffer_size_kb write can
still OOM-kill an unrelated victim process:

I pulled your ftrace/core branch and added this, commenting out the
si_mem_available() check:
+	/*
 	i = si_mem_available();
 	if (i < nr_pages)
 		return -ENOMEM;
+	*/
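
For reference, the check I commented out sits in the ring-buffer page
allocation path (__rb_allocate_pages() in kernel/trace/ring_buffer.c, if
I'm reading it right). A paraphrased sketch of the surrounding code, not
the exact lines from your tree:

	static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
	{
		long i;

		/*
		 * si_mem_available() is only a rough estimate of free memory,
		 * but it lets the resize fail with -ENOMEM up front instead
		 * of starting an allocation that obviously cannot succeed.
		 */
		i = si_mem_available();
		if (i < nr_pages)
			return -ENOMEM;

		/* ... per-page allocation loop for the new buffer pages ... */

		return 0;
	}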

Here's a run in QEMU with 4 cores and 1GB of total memory:

bash-4.3# ./m -m 1M &
[1] 1056
bash-4.3#
bash-4.3#
bash-4.3#
bash-4.3# echo 10000000 > /d/tracing/buffer_size_kb
[ 33.213988] Out of memory: Kill process 1042 (bash) score
1712050900 or sacrifice child
[ 33.215349] Killed process 1056 (m) total-vm:9220kB,
anon-rss:7564kB, file-rss:4kB, shmem-rss:640kB
bash: echo: write error: Cannot allocate memory
[1]+ Killed ./m -m 1M
bash-4.3#
--

As you can see, the OOM killer triggers and kills "m", which is my busy
memory allocator (it allocates and frees lots of memory in a loop).

Here's the m program, sorry if it looks too ugly:
https://pastebin.com/raw/aG6Qw37Z
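
In case the pastebin link rots, the core of it is roughly this (a
simplified sketch, not the exact program; I'm assuming -m gives the
chunk size in megabytes):

	/*
	 * Sketch of the "m" memory hog: allocate a chunk, touch every
	 * byte so the pages are really faulted in, free it, repeat.
	 */
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		size_t mb = 1;	/* default chunk size: 1 MB */
		int opt;

		while ((opt = getopt(argc, argv, "m:")) != -1) {
			if (opt == 'm')
				mb = strtoul(optarg, NULL, 0);
		}

		for (;;) {
			char *p = malloc(mb << 20);

			if (p) {
				memset(p, 0xaa, mb << 20); /* fault pages in */
				free(p);
			}
		}
		return 0;
	}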

Happy to try anything else. BTW, with the si_mem_available() check
enabled, this doesn't happen and the buffer_size_kb write fails cleanly
without hurting anything else.
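
And just so we're on the same page about $SUBJECT, my understanding is
that the allocation loop ends up looking roughly like this (a sketch
from memory, not your actual diff; the fatal_signal_pending() bail-out
is my assumption about how the task notices it was picked):

	/* user_thread: set when the resize came from a user-space write */
	if (user_thread)
		set_current_oom_origin();

	for (i = 0; i < nr_pages; i++) {
		struct buffer_page *bpage;

		bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
				     GFP_KERNEL, cpu_to_node(cpu));
		if (!bpage)
			goto free_pages;

		/* ... allocate the data page for bpage ... */

		/* if the OOM killer picked us, stop allocating and unwind */
		if (user_thread && fatal_signal_pending(current))
			goto free_pages;
	}

	if (user_thread)
		clear_current_oom_origin();

That way the task writing to buffer_size_kb is the preferred OOM victim
instead of an innocent process like "m" above.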

- Joel