[RFC][PATCH 0/9] memcg soft limit v2 (new design)

From: KAMEZAWA Hiroyuki
Date: Fri Apr 03 2009 - 04:10:39 EST


Hi,

The memory cgroup soft limit feature is a way to tell the global LRU
"please reclaim from this memcg at memory shortage".

This is v2. It fixes some problems under hierarchy and adds soft limit
update hooks at the proper places.

This patch set is on top of
mmotm-Mar23 + memcg-cleanup-cache_charge.patch
+ vmscan-fix-it-to-take-care-of-nodemask.patch

So, not for wide use ;)

This patch set avoids using memcg's existing reclaim routine and instead
just gives "hints" to the global LRU. It has been briefly tested and shows
good results for me. (But maybe not for you; please blame me.)

Major characteristics are:
- a memcg is inserted into the softlimit-queue at charge() if its usage
  exceeds its soft limit.
- the softlimit-queue is a priority queue; priority is determined by the size
  of the excess over the soft limit.
- the memcg soft limit hook is called from shrink_xxx_list() to provide hints
  (a rough sketch of the queueing idea follows this list).
- behavior is affected by vm.swappiness, and the LRU scan rate is determined
  by the global LRU's status.

In this v2:
- problems under the use_hierarchy=1 case are fixed.
- more hooks are added.
- the code is cleaned up.

It shows good results in tests on my private box under several workloads.

But in a special, artificial case, when the victim memcg's active/inactive
ratio of anon pages is very different from the global LRU's, the result is not
very good.
i.e.
under the victim memcg, ACTIVE_ANON=100%, INACTIVE=0% (memory accessed in a busy loop)
under the global LRU, ACTIVE_ANON=10%, INACTIVE=90% (almost all processes are sleeping.)
Memory can be swapped out from the global LRU, but not from the victim.
(If there is file cache in the victim, the file cache will be dropped.)

But in this case, even if we successfully swap out anon pages under the victim
memcg, they will come back to memory soon and can cause heavy thrashing.

While using soft limit, I felt this is a useful feature :)
But I'll keep this as an RFC for a while. I'll prepare Documentation before the next post.

Thanks,
-Kame
