Re: [PATCH 1/7] memcg: sc.nr_to_reclaim should be initialized

From: KOSAKI Motohiro
Date: Fri Jul 23 2010 - 03:33:24 EST


> * KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx> [2010-07-16 19:13:31]:
>
> > Currently, mem_cgroup_shrink_node_zone() initializes sc.nr_to_reclaim to 0.
> > This means shrink_zone() only scans 32 pages and returns immediately, even
> > if it doesn't reclaim any pages.
> >
> > This patch fixes it.
> >
> > Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
> > ---
> > mm/vmscan.c | 1 +
> > 1 files changed, 1 insertions(+), 0 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 1691ad0..bd1d035 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1932,6 +1932,7 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *mem,
> > struct zone *zone, int nid)
> > {
> > struct scan_control sc = {
> > + .nr_to_reclaim = SWAP_CLUSTER_MAX,
> > .may_writepage = !laptop_mode,
> > .may_unmap = 1,
> > .may_swap = !noswap,
>
> Could you please do some additional testing on
>
> 1. How far does this push pages (in terms of when the limit is hit)?

At most 32 pages per call to mem_cgroup_shrink_node_zone().

That said, the algorithm is this (a rough model follows the list):

1. call mem_cgroup_largest_soft_limit_node() to pick the cgroup with the
   largest soft-limit excess
2. call mem_cgroup_shrink_node_zone() to shrink up to 32 pages from it
3. go back to step 1 if the limit is still exceeded
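To make that loop concrete, here is a tiny userspace model of the three steps
(not kernel code; fake_cgroup, pick_largest_group() and the page counts are
made up for illustration, assuming the loop is meant to behave as described):

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL

struct fake_cgroup {
	const char *name;
	unsigned long excess;	/* pages currently over the soft limit */
};

static struct fake_cgroup groups[] = {
	{ "A", 100 }, { "B", 40 }, { "C", 10 },
};

/* step 1: pick the group with the largest soft-limit excess */
static struct fake_cgroup *pick_largest_group(void)
{
	struct fake_cgroup *best = NULL;
	unsigned int i;

	for (i = 0; i < sizeof(groups) / sizeof(groups[0]); i++)
		if (groups[i].excess && (!best || groups[i].excess > best->excess))
			best = &groups[i];
	return best;
}

int main(void)
{
	unsigned long total = 0;
	struct fake_cgroup *g;

	while ((g = pick_largest_group()) != NULL) {
		/* step 2: shrink at most SWAP_CLUSTER_MAX (32) pages */
		unsigned long batch = g->excess < SWAP_CLUSTER_MAX ?
					g->excess : SWAP_CLUSTER_MAX;

		g->excess -= batch;
		total += batch;
		printf("reclaimed %lu pages from %s, still over by %lu\n",
		       batch, g->name, g->excess);
		/* step 3: loop again while some group still exceeds its limit */
	}

	printf("total reclaimed: %lu pages\n", total);
	return 0;
}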

If that's not your intention, can you please describe your intended algorithm?


> 2. Did you hit a problem with the current setting or is it a review
> fix?

I found this by review, and my patch works fine in my test environment.
Of course, if you do _not_ run this code under heavy memory pressure, the
original code works too.
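To spell out the mechanism: shrink_zone() bails out of its LRU-scanning loop
once nr_reclaimed >= sc->nr_to_reclaim (when not at the default priority), so
a zero nr_to_reclaim stops it after the first 32-page batch even if nothing
was reclaimed. Below is a tiny userspace model of that behaviour (not the real
mm/vmscan.c code; the page counts and helpers are invented, and the bail-out
condition is condensed from my reading of that function):

#include <stdio.h>

#define SWAP_CLUSTER_MAX	32UL
#define DEF_PRIORITY		12

/* model one shrink_zone() call; pretend every scanned page is unreclaimable */
static void model_shrink_zone(int priority, unsigned long nr_to_reclaim,
			      unsigned long lru_pages)
{
	unsigned long nr_scanned = 0, nr_reclaimed = 0;

	while (nr_scanned < lru_pages) {
		nr_scanned += SWAP_CLUSTER_MAX;	/* one batch of 32 pages */

		/*
		 * Condensed bail-out check: with nr_to_reclaim == 0 it is
		 * true right after the first batch (0 >= 0), so we stop
		 * even though nothing was reclaimed.
		 */
		if (nr_reclaimed >= nr_to_reclaim && priority < DEF_PRIORITY)
			break;
	}

	printf("nr_to_reclaim=%lu: scanned %lu pages, reclaimed %lu\n",
	       nr_to_reclaim, nr_scanned, nr_reclaimed);
}

int main(void)
{
	model_shrink_zone(0, 0, 1024);			/* unpatched behaviour */
	model_shrink_zone(0, SWAP_CLUSTER_MAX, 1024);	/* patched behaviour */
	return 0;
}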

