Re: [PATCH 4/5] add isolate pages vmstat

From: Minchan Kim
Date: Mon Jul 06 2009 - 21:48:25 EST


It looks good to me.
Thanks for your effort. I added my Reviewed-by tag. :)

One side note remains.
This accounting feature exists because of the direct reclaim bomb.
If we can prevent the direct reclaim bomb (too many concurrent direct
reclaimers isolating pages at once), I think this feature can be removed.

As far as I know, Rik or Wu is working on a patch to throttle direct reclaim.

On Tue, 7 Jul 2009 10:19:53 +0900 (JST)
KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx> wrote:

> > > > Index: b/mm/vmscan.c
> > > > ===================================================================
> > > > --- a/mm/vmscan.c
> > > > +++ b/mm/vmscan.c
> > > > @@ -1082,6 +1082,7 @@ static unsigned long shrink_inactive_lis
> > > > -count[LRU_ACTIVE_ANON]);
> > > > __mod_zone_page_state(zone, NR_INACTIVE_ANON,
> > > > -count[LRU_INACTIVE_ANON]);
> > > > + __mod_zone_page_state(zone, NR_ISOLATED_ANON + file, nr_taken);
> > >
> > > Lumpy can reclaim file + anon anywhere.
> > > How about using count[NR_LRU_LISTS]?
> >
> > Ah yes, good catch.
>
> Fixed.
>
> Subject: [PATCH] add isolate pages vmstat
>
> If the system has plenty of threads or processes, concurrent reclaim can
> isolate a very large number of pages.
> Unfortunately, the current /proc/meminfo and OOM log can't show it.
>
> This patch provides a way to show this information.
>
>
> How to reproduce
> -----------------------
> % ./hackbench 140 process 1000
> => causes OOM
>
> Active_anon:146 active_file:41 inactive_anon:0
> inactive_file:0 unevictable:0
> isolated_anon:49245 isolated_file:113
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> dirty:0 writeback:0 buffer:49 unstable:0
> free:184 slab_reclaimable:276 slab_unreclaimable:5492
> mapped:87 pagetables:28239 bounce:0
>
>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
Reviewed-by: Minchan Kim <minchan.kim@xxxxxxxxx>

--
Kind regards,
Minchan Kim
--