Re: [RFC PATCH] No Reclaim LRU Infrastructure enhancement for memcgroup

From: Daisuke Nishimura
Date: Thu May 29 2008 - 07:18:22 EST


On 2008/05/29 11:30 +0900, Balbir Singh wrote:
> KOSAKI Motohiro wrote:
>> Hi
>>
>>> In my understanding, there are 2 checks we have to do.
>>>
>>> 1. When memcg finds a PG_noreclaim page in its LRU, move it to the noreclaim
>>> list of the memcg.
>>> 2. When a PG_noreclaim page is moved back to the generic LRU, memcg should
>>> move it to its own list as well. (we have to add a hook somewhere.)
>>>
>>> But this may break the current 'loose' synchronization between the global LRU
>>> and memcg's LRU. When is a PG_noreclaim page put back into the active/inactive LRU?
>>>
>>> My concerns are
>>> A. how to implement '2'
>> I tried to implement it today.
>> This patch is made against "[PATCH -mm 13/16] No Reclaim LRU Infrastructure".
>>
>>
>>> B. race conditions.
>> Don't worry :)
>> It isn't a big problem.
>>
>> If the global LRU says reclaimable but the memcg LRU says noreclaimable
>> -> we can repair it when putting pages back on the LRU in shrink_[in]active_list().
>>
>> If the global LRU says noreclaimable but the memcg LRU says reclaimable
>> -> we can repair it in mem_cgroup_isolate_pages().
>>
>>
>
> I've tried these patches and I still get OOM killed. I'll investigate a bit more.
>
>
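To make the quoted checks '1' and '2' concrete, here is a small user-space model of
the bookkeeping being discussed: a per-group noreclaim list kept consistent with a
per-page PG_noreclaim-like flag, with the repair done lazily when the group's LRU is
scanned. All of the names below are made up for illustration; this is not the memcg
or split-LRU code itself.

#include <stdbool.h>
#include <stdio.h>

/* Toy model: "pages" carry a noreclaim flag (like PG_noreclaim) and
 * each "group" keeps a reclaimable list and a noreclaim list. */
struct toy_page {
	bool noreclaim;
	struct toy_page *next;
};

struct toy_group {
	struct toy_page *lru;		/* reclaimable pages */
	struct toy_page *noreclaim;	/* pages that must not be reclaimed */
};

static void push(struct toy_page **list, struct toy_page *page)
{
	page->next = *list;
	*list = page;
}

static struct toy_page *pop(struct toy_page **list)
{
	struct toy_page *page = *list;

	if (page)
		*list = page->next;
	return page;
}

/* Check '1' (and the lazy repair): while scanning the group's LRU,
 * divert flagged pages to the group's noreclaim list instead of
 * treating them as reclaim candidates. */
static void scan_group_lru(struct toy_group *grp)
{
	struct toy_page *page, *keep = NULL;

	while ((page = pop(&grp->lru)) != NULL) {
		if (page->noreclaim)
			push(&grp->noreclaim, page);
		else
			push(&keep, page);	/* candidate for reclaim */
	}
	grp->lru = keep;
}

/* Check '2': when a page becomes reclaimable again, a hook has to
 * clear the flag and put it back on the group's ordinary LRU. */
static void page_became_reclaimable(struct toy_group *grp, struct toy_page *page)
{
	if (!page)
		return;
	page->noreclaim = false;
	push(&grp->lru, page);
}

int main(void)
{
	struct toy_page pages[2] = { { .noreclaim = true }, { .noreclaim = false } };
	struct toy_group grp = { NULL, NULL };

	push(&grp.lru, &pages[0]);
	push(&grp.lru, &pages[1]);

	scan_group_lru(&grp);		/* pages[0] moves to the noreclaim list */
	page_became_reclaimable(&grp, pop(&grp.noreclaim));

	printf("page0 noreclaim=%d\n", pages[0].noreclaim);
	return 0;
}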

Hmm... I cannot reproduce this problem.
When a process exceeds the limit of the group, swap is used as expected.

I tested 2.6.26-rc2-mm1 + splitlru-v8 + Kosaki-san's 3 patches
+ a fix for shrink_active_list() (attached below).
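
About the attached one-liner: if I read shrink_active_list() correctly, pgmoved is
reused as a running counter by the loop that follows this hunk, so without the reset
the later page accounting is inflated by the number of pages isolated in the first
pass. A toy sketch of that pattern (only the shape of the bug, not the kernel code):

#include <stdio.h>

int main(void)
{
	int pgmoved = 0;
	int i;

	/* first pass: count pages taken off the active list */
	for (i = 0; i < 10; i++)
		pgmoved++;
	printf("isolated: %d\n", pgmoved);

	pgmoved = 0;	/* the reset the attached patch adds */

	/* second pass: count pages actually moved afterwards */
	for (i = 0; i < 4; i++)
		pgmoved++;

	/* without the reset this would report 14 instead of 4 */
	printf("moved: %d\n", pgmoved);
	return 0;
}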

I suspect a patch may be missing: have you applied rvr-07.1-kosaki-memcg-shrink_list.patch,
which Kosaki-san posted in reply to [07/16] of this thread?


Thanks,
Daisuke Nishimura.

---
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d58cb5e..6f9f764 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1153,6 +1153,7 @@ static void shrink_active_list(unsigned long nr_pages, struct zone *
 		__mod_zone_page_state(zone, NR_ACTIVE_ANON, -pgmoved);
 	spin_unlock_irq(&zone->lru_lock);
 
+	pgmoved = 0;
 	while (!list_empty(&l_hold)) {
 		cond_resched();
 		page = lru_to_page(&l_hold);
--