Re: [PATCH] mm: check zone->all_unreclaimable in all_unreclaimable()

From: Minchan Kim
Date: Thu Mar 10 2011 - 19:18:30 EST


On Fri, Mar 11, 2011 at 8:58 AM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Thu, 10 Mar 2011 15:58:29 +0900
> Minchan Kim <minchan.kim@xxxxxxxxx> wrote:
>
>> Hi Kame,
>>
>> Sorry for late response.
>> I only had a short time to test this issue because I am very busy these days.
>> This issue was interesting to me,
>> so I hope to take time for thorough testing when I can.
>> I need to find the root cause of the livelock.
>>
>
> Thanks. I and Kosaki-san reproduced the bug with swapless system.
> Now, Kosaki-san is digging and found some issue with scheduler boost at OOM
> and lack of enough "wait" in vmscan.c.
>
> I myself made patch like attached one. This works well for returning TRUE at
> all_unreclaimable(), but livelock (deadlock?) still happens.

I saw the deadlock.
From my quick debugging it seems to be caused by the following code, but I'm
not sure. I need to investigate further but don't have time now. :(


/*
 * Note: this may have a chance of deadlock if it gets
 * blocked waiting for another task which itself is waiting
 * for memory. Is there a better alternative?
 */
if (test_tsk_thread_flag(p, TIF_MEMDIE))
        return ERR_PTR(-1UL);

It would wait forever for the task to die, without selecting another victim.
If that's right, it's a known BUG and we have had no choice until now. Hmm.

> I suspect vmscan itself isn't the key to fixing this issue.

I agree.

> Then, I'd like to wait for Kosaki-san's answer ;)

Me, too. :)

>
> I'm now wondering how to catch fork-bomb and stop it (without using cgroup).

Yes. Fork throttling without cgroup is very important.
And, as an off-topic note, the mem_notify without memcontrol you mentioned is
important to embedded people, I guess.

> I think the problem is that fork-bomb is faster than killall...

And the deadlock problem I mentioned.

>
> Thanks,
> -Kame

Thanks for the investigation, Kame.

> ==
>
> This is just a debug patch.
>
> ---
>  mm/vmscan.c |   58 ++++++++++++++++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 54 insertions(+), 4 deletions(-)
>
> Index: mmotm-0303/mm/vmscan.c
> ===================================================================
> --- mmotm-0303.orig/mm/vmscan.c
> +++ mmotm-0303/mm/vmscan.c
> @@ -1983,9 +1983,55 @@ static void shrink_zones(int priority, s
>         }
>  }
>
> -static bool zone_reclaimable(struct zone *zone)
> +static bool zone_seems_empty(struct zone *zone, struct scan_control *sc)
>  {
> -       return zone->pages_scanned < zone_reclaimable_pages(zone) * 6;
> +       unsigned long nr, wmark, free, isolated, lru;
> +
> +       /*
> +        * If scanned, zone->pages_scanned is incremented and this can
> +        * trigger OOM.
> +        */
> +       if (sc->nr_scanned)
> +               return false;
> +
> +       free = zone_page_state(zone, NR_FREE_PAGES);
> +       isolated = zone_page_state(zone, NR_ISOLATED_FILE);
> +       if (nr_swap_pages)
> +               isolated += zone_page_state(zone, NR_ISOLATED_ANON);
> +
> +       /* If we cannot scan, don't count LRU pages. */
> +       if (!zone->all_unreclaimable) {
> +               lru = zone_page_state(zone, NR_ACTIVE_FILE);
> +               lru += zone_page_state(zone, NR_INACTIVE_FILE);
> +               if (nr_swap_pages) {
> +                       lru += zone_page_state(zone, NR_ACTIVE_ANON);
> +                       lru += zone_page_state(zone, NR_INACTIVE_ANON);
> +               }
> +       } else
> +               lru = 0;
> +       nr = free + isolated + lru;
> +       wmark = min_wmark_pages(zone);
> +       wmark += zone->lowmem_reserve[gfp_zone(sc->gfp_mask)];
> +       wmark += 1 << sc->order;
> +       printk("thread %d/%ld all %d scanned %ld pages %ld/%ld/%ld/%ld/%ld/%ld\n",
> +               current->pid, sc->nr_scanned, zone->all_unreclaimable,
> +               zone->pages_scanned,
> +               nr, free, isolated, lru,
> +               zone_reclaimable_pages(zone), wmark);
> +       /*
> +        * In some cases (especially noswap), almost all page cache has been
> +        * paged out and the amount of reclaimable+free pages is smaller than
> +        * zone->min. In this case, we cannot expect any recovery other
> +        * than OOM-KILL. We can't reclaim enough memory for usual tasks.
> +        */
> +
> +       return nr <= wmark;
> +}
> +
> +static bool zone_reclaimable(struct zone *zone, struct scan_control *sc)
> +{
> +       /* zone_reclaimable_pages() can return 0, we need <= */
> +       return zone->pages_scanned <= zone_reclaimable_pages(zone) * 6;
>  }
>
>  /*
> @@ -2006,11 +2052,15 @@ static bool all_unreclaimable(struct zon
>                         continue;
>                 if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
>                         continue;
> -               if (zone_reclaimable(zone)) {
> +               if (zone_seems_empty(zone, sc))
> +                       continue;
> +               if (zone_reclaimable(zone, sc)) {
>                         all_unreclaimable = false;
>                         break;
>                 }
>         }
> +       if (all_unreclaimable)
> +               printk("all_unreclaimable() returns TRUE\n");
>
>         return all_unreclaimable;
>  }
> @@ -2456,7 +2506,7 @@ loop_again:
>                         if (zone->all_unreclaimable)
>                                 continue;
>                         if (!compaction && nr_slab == 0 &&
> -                           !zone_reclaimable(zone))
> +                           !zone_reclaimable(zone, &sc))
>                                 zone->all_unreclaimable = 1;
>                         /*
>                          * If we've done a decent amount of scanning and
>
>



--
Kind regards,
Minchan Kim