Re: [RFC][PATCH v3 0/10] memcg async reclaim

From: KAMEZAWA Hiroyuki
Date: Thu May 26 2011 - 22:56:23 EST


On Thu, 26 May 2011 18:49:26 -0700
Ying Han <yinghan@xxxxxxxxxx> wrote:

> On Wed, May 25, 2011 at 10:10 PM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >
> > It's the merge window now... I'm just dumping my patch queue to hear others' ideas.
> > I wonder whether I should wait until dirty_ratio for memcg is queued to mmotm...
> > I'll be busy with LinuxCon Japan etc. next week.
> >
> > This patch set is on top of mmotm-May-11 plus some patches already queued in mmotm, such as numa_stat.
> >
> > This is a patch set for memcg that keeps a margin below the limit by reclaiming
> > in the background. By keeping that margin, an application can avoid foreground
> > memory reclaim at charge() time, and this helps latency.
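
As a minimal sketch of that idea only (names such as memcg_margin_pages(),
MEMCG_ASYNC_MARGIN_PAGES, memcg_async_wq and async_reclaim_work are hypothetical,
not the actual patch code): the charge path checks the remaining margin and kicks
a background worker before the limit is actually hit.

#include <linux/workqueue.h>
#include <linux/memcontrol.h>

static void memcg_check_margin_and_kick(struct mem_cgroup *memcg)
{
	/* pages still available before the hard limit (hypothetical helper) */
	unsigned long margin = memcg_margin_pages(memcg);

	/* keep synchronous reclaim only for the case where the limit is truly hit */
	if (margin < MEMCG_ASYNC_MARGIN_PAGES)
		queue_work(memcg_async_wq, &memcg->async_reclaim_work);
}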
> >
> > Main changes from v2 are:
> >  - use SCHED_IDLE.
> >  - removed most of the heuristic code. The code is now very simple.
> >
> > By using SCHED_IDLE, async memory reclaim consumes only roughly 0.3% of CPU
> > when the system is truly busy, but can use much more CPU when the CPU is idle.
> > Because my purpose is to reduce latency without affecting other running
> > applications, SCHED_IDLE fits this work.
> >
> > If the application needs to stop for some I/O or event, background memory reclaim
> > will cull memory while the system is idle.
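
The series drives this from a workqueue; purely as an illustration of the
scheduling-class trick (memcg_reclaim_to_margin() is a hypothetical helper, not
from the patches), a dedicated worker thread could demote itself to SCHED_IDLE
like this:

#include <linux/kthread.h>
#include <linux/sched.h>

static int memcg_async_reclaim_thread(void *data)
{
	/* SCHED_IDLE only accepts a static priority of 0 */
	struct sched_param param = { .sched_priority = 0 };

	sched_setscheduler_nocheck(current, SCHED_IDLE, &param);

	while (!kthread_should_stop()) {
		/* shrink the group back under its margin (hypothetical helper) */
		memcg_reclaim_to_margin(data);
		/* nap briefly before re-checking the margin */
		schedule_timeout_interruptible(HZ / 10);
	}
	return 0;
}

SCHED_IDLE tasks only get CPU time that no other class wants, which is why the
worker is nearly free on a busy system but can use idle cycles freely.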
> >
> > Performance:
> >  Running an httpd (apache) under a 300M limit, and accessing a 600MB working set
> >  with normally distributed accesses via apache-bench.
> >  apache-bench's concurrency was 4 and it performed 40960 accesses.
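
For reference, those parameters correspond to an apache-bench invocation along
the lines of "ab -c 4 -n 40960 http://localhost/<target>"; the exact URL and
page set are not given in the original mail.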
> >
> > Without async reclaim:
> > Connection Times (ms)
> >               min  mean[+/-sd] median   max
> > Connect:        0    0   0.0      0       2
> > Processing:    30   37  28.3     32    1793
> > Waiting:       28   35  25.5     31    1792
> > Total:         30   37  28.4     32    1793
> >
> > Percentage of the requests served within a certain time (ms)
> >  50%     32
> >  66%     32
> >  75%     33
> >  80%     34
> >  90%     39
> >  95%     60
> >  98%    100
> >  99%    133
> > 100%   1793 (longest request)
> >
> > With async reclaim:
> > Connection Times (ms)
> >               min  mean[+/-sd] median   max
> > Connect:        0    0   0.0      0       2
> > Processing:    30   35  12.3     32     678
> > Waiting:       28   34  12.0     31     658
> > Total:         30   35  12.3     32     678
> >
> > Percentage of the requests served within a certain time (ms)
> >  50%     32
> >  66%     32
> >  75%     33
> >  80%     34
> >  90%     39
> >  95%     49
> >  98%     71
> >  99%     86
> > 100%    678 (longest request)
> >
> >
> > It seems latency is stabilized by hiding memory reclaim.
> >
> > The memory reclaim statistics were as follows.
> > See patch 10 for the meaning of each member.
> >
> > == without async reclaim ==
> > recent_scan_success_ratio 44
> > limit_scan_pages 388463
> > limit_freed_pages 162238
> > limit_elapsed_ns 13852159231
> > soft_scan_pages 0
> > soft_freed_pages 0
> > soft_elapsed_ns 0
> > margin_scan_pages 0
> > margin_freed_pages 0
> > margin_elapsed_ns 0
> >
> > == with async reclaim ==
> > recent_scan_success_ratio 6
> > limit_scan_pages 0
> > limit_freed_pages 0
> > limit_elapsed_ns 0
> > soft_scan_pages 0
> > soft_freed_pages 0
> > soft_elapsed_ns 0
> > margin_scan_pages 1295556
> > margin_freed_pages 122450
> > margin_elapsed_ns 644881521
> >
> >
> > In this case, the SCHED_IDLE workqueue can reclaim enough memory for the httpd.
> >
> > I may need to dig into why scan_success_ratio differs so much between the two cases.
> > I guess the difference in elapsed_ns is because several threads enter
> > memory reclaim when async reclaim doesn't run. But maybe not...
> >
>
>
> Hmm.. I noticed very strange behavior in a simple test with the patch set.
>
> Test:
> I created a 4g memcg and started doing cat. The memcg then got OOM-killed
> as soon as it reached its hard limit. We shouldn't hit OOM even
> without async reclaim.
>
> Again, I will read through the patches, but I'd like to post the test result first.
>
> $ echo $$ >/dev/cgroup/memory/A/tasks
> $ cat /dev/cgroup/memory/A/memory.limit_in_bytes
> 4294967296
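
(The group itself was presumably created and sized beforehand, e.g. with
"mkdir /dev/cgroup/memory/A; echo 4G > /dev/cgroup/memory/A/memory.limit_in_bytes",
similar to the 300M setup shown further down; those steps are not in the
original mail.)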
>
> $ time cat /export/hdc3/dd_A/tf0 > /dev/zero
> Killed
>
> real 0m53.565s
> user 0m0.061s
> sys 0m4.814s
>

Hmm, what I see is
==
[root@bluextal kamezawa]# ls -l test/1G
-rw-rw-r--. 1 kamezawa kamezawa 1053261824 May 13 13:58 test/1G
[root@bluextal kamezawa]# mkdir /cgroup/memory/A
[root@bluextal kamezawa]# echo 0 > /cgroup/memory/A/tasks
[root@bluextal kamezawa]# echo 300M > /cgroup/memory/A/memory.limit_in_bytes
[root@bluextal kamezawa]# echo 1 > /cgroup/memory/A/memory.async_control
[root@bluextal kamezawa]# cat test/1G > /dev/null
[root@bluextal kamezawa]# cat /cgroup/memory/A/memory.reclaim_stat
recent_scan_success_ratio 83
limit_scan_pages 82
limit_freed_pages 49
limit_elapsed_ns 242507
soft_scan_pages 0
soft_freed_pages 0
soft_elapsed_ns 0
margin_scan_pages 218630
margin_freed_pages 181598
margin_elapsed_ns 117466604
[root@bluextal kamezawa]#
==

I'll turn off swapaccount and try again.
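
(That is, disabling memcg swap accounting, e.g. by booting with swapaccount=0
on kernels of this vintage.)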

Thanks,
-Kame


