Re: [RFC][PATCH v3 0/10] memcg async reclaim

From: KAMEZAWA Hiroyuki
Date: Thu May 26 2011 - 23:12:38 EST


On Fri, 27 May 2011 11:48:37 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:

> On Thu, 26 May 2011 18:49:26 -0700
> Ying Han <yinghan@xxxxxxxxxx> wrote:

> > Hmm.. I noticed a very strange behavior on a simple test w/ the patch set.
> >
> > Test:
> > I created a 4G memcg and started running cat in it. The memcg gets
> > OOM-killed as soon as it reaches its hard_limit. We shouldn't hit OOM
> > even w/o async-reclaim.
> >
> > Again, I will read through the patch, but I'd like to post the test
> > result first.
> >
> > $ echo $$ >/dev/cgroup/memory/A/tasks
> > $ cat /dev/cgroup/memory/A/memory.limit_in_bytes
> > 4294967296
> >
> > $ time cat /export/hdc3/dd_A/tf0 > /dev/zero
> > Killed
> >
> > real 0m53.565s
> > user 0m0.061s
> > sys 0m4.814s
> >
>
> Hmm, what I see is
> ==
> [root@bluextal kamezawa]# ls -l test/1G
> -rw-rw-r--. 1 kamezawa kamezawa 1053261824 May 13 13:58 test/1G
> [root@bluextal kamezawa]# mkdir /cgroup/memory/A
> [root@bluextal kamezawa]# echo 0 > /cgroup/memory/A/tasks
> [root@bluextal kamezawa]# echo 300M > /cgroup/memory/A/memory.limit_in_bytes
> [root@bluextal kamezawa]# echo 1 > /cgroup/memory/A/memory.async_control
> [root@bluextal kamezawa]# cat test/1G > /dev/null
> [root@bluextal kamezawa]# cat /cgroup/memory/A/memory.reclaim_stat
> recent_scan_success_ratio 83
> limit_scan_pages 82
> limit_freed_pages 49
> limit_elapsed_ns 242507
> soft_scan_pages 0
> soft_freed_pages 0
> soft_elapsed_ns 0
> margin_scan_pages 218630
> margin_freed_pages 181598
> margin_elapsed_ns 117466604
> [root@bluextal kamezawa]#
> ==
>
> I'll turn off swapaccount and try again.
>
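As a sanity check on the numbers quoted above: assuming
recent_scan_success_ratio means freed/scanned over recent scans, it is
consistent with the margin-reclaim path, which dominates this run:
181598 / 218630 comes out to 83%. A quick way to recompute it from the
stat file (a sketch; path as in my session above):
==
# recompute freed/scanned for the margin path from the stat file
awk '/^margin_scan_pages/  {scanned = $2}
     /^margin_freed_pages/ {freed = $2}
     END {printf "%d\n", freed * 100 / scanned}' \
    /cgroup/memory/A/memory.reclaim_stat
==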

A bug found: I added the memory.async_control file to the memsw file set
by mistake. As a result, async_control cannot be enabled when
swapaccount=0. I'll fix that.
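If the same thing happened on your kernel, the file simply won't be
registered when swap accounting is off. A quick way to confirm (a
sketch; the /cgroup/memory mount point matches my session above, and
the swapaccount= parameter is assumed to be visible on the kernel
command line):
==
# check whether swap accounting was disabled at boot
grep -o 'swapaccount=[^ ]*' /proc/cmdline
# with the file in the memsw set and swapaccount=0, it won't exist
ls /cgroup/memory/A/memory.async_control 2>/dev/null \
    || echo "async_control not registered"
==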

So, how did you enable async_control?
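For reference, this is the sequence from my session above (same steps;
/cgroup/memory is where my setup mounts the memory controller):
==
mkdir /cgroup/memory/A
echo 0 > /cgroup/memory/A/tasks
echo 300M > /cgroup/memory/A/memory.limit_in_bytes
echo 1 > /cgroup/memory/A/memory.async_control
==
On my side this succeeds with swap accounting enabled; with
swapaccount=0 the last write fails because of the bug above.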

Thanks,
-Kame


