Re: [patch] mm: vmscan implement per-zone shrinkers

From: KOSAKI Motohiro
Date: Sun Nov 14 2010 - 19:50:51 EST


> > @@ -1835,8 +1978,6 @@ static void shrink_zone(int priority, st
> > break;
> > }
> >
> > - sc->nr_reclaimed = nr_reclaimed;
> > -
> > /*
> > * Even if we did not try to evict anon pages at all, we want to
> > * rebalance the anon lru active/inactive ratio.
> > @@ -1844,6 +1985,23 @@ static void shrink_zone(int priority, st
> > if (inactive_anon_is_low(zone, sc))
> > shrink_active_list(SWAP_CLUSTER_MAX, zone, sc, priority, 0);
> >
> > + /*
> > + * Don't shrink slabs when reclaiming memory from
> > + * over limit cgroups
> > + */
> > + if (sc->may_reclaim_slab) {
> > + struct reclaim_state *reclaim_state = current->reclaim_state;
> > +
> > + shrink_slab(zone, sc->nr_scanned - nr_scanned,
>
> A doubtful calculation. What does "sc->nr_scanned - nr_scanned" mean?
> I think using plain nr_scanned would simply keep the old slab
> balancing behavior.

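For reference, the pre-patch global shrink_slab() balances slab against the
page cache by scanning each shrinker's objects in proportion to the fraction
of global LRU pages just scanned. A rough sketch of that existing logic
(not part of this patch; scanned, max_pass and lru_pages are the old
shrink_slab() locals):

	unsigned long long delta;

	delta = (4 * scanned) / shrinker->seeks; /* pages scanned, weighted by seek cost */
	delta *= max_pass;                       /* scaled by the shrinker's object count */
	do_div(delta, lru_pages + 1);            /* ... as a fraction of total LRU pages */
	shrinker->nr += delta;
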
Moreover, per-zone reclaim can lead to a new issue. On a 32-bit highmem
system, the system can theoretically end up with the following memory usage:

ZONE_HIGHMEM: 100% used for page cache
ZONE_NORMAL:  100% used for slab

Then the traditional page-cache/slab balancing may not work: scanning
ZONE_NORMAL finds almost no LRU pages, so a scan-proportional calculation
would never shrink its slab. I think the following new calculation, or
something like it, is necessary.

if (zone_reclaimable_pages(zone) > zone_page_state(zone, NR_SLAB_RECLAIMABLE)) {
	/* use the current calculation */
} else {
	/* shrink "objects >> reclaim-priority" objects,
	 * as in the page cache scanning calculation */
}

However, that can perhaps be done as a separate patch.
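
A minimal sketch of the idea, assuming the per-zone shrink_slab() is given
the zone and the reclaim priority (the helper name and parameters here are
illustrative, not from the patch):

/*
 * Hypothetical helper: choose the slab scan target for one zone.
 * zone_reclaimable_pages() and NR_SLAB_RECLAIMABLE exist today;
 * the rest only illustrates the proposed split.
 */
static unsigned long zone_slab_scan_target(struct zone *zone,
					   unsigned long nr_objects,
					   unsigned long nr_scanned,
					   unsigned long lru_pages,
					   int priority)
{
	unsigned long slab_pages = zone_page_state(zone, NR_SLAB_RECLAIMABLE);

	if (zone_reclaimable_pages(zone) > slab_pages) {
		/* current behaviour: slab scan proportional to LRU scan */
		return nr_objects * nr_scanned / (lru_pages + 1);
	}

	/*
	 * Slab dominates the zone (e.g. ZONE_NORMAL full of slab while
	 * the page cache sits in ZONE_HIGHMEM): fall back to
	 * priority-driven scanning, like the page cache calculation.
	 */
	return nr_objects >> priority;
}

The ">> priority" fallback mirrors how shrink_zone() derives its LRU scan
counts, so slab in a page-cache-free zone still gets scanned harder as the
priority rises.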



>
>
> > + lru_pages, global_lru_pages, sc->gfp_mask);
> > + if (reclaim_state) {
> > + nr_reclaimed += reclaim_state->reclaimed_slab;
> > + reclaim_state->reclaimed_slab = 0;
> > + }
> > + }
> > +
> > + sc->nr_reclaimed = nr_reclaimed;
> > +
> > throttle_vm_writeout(sc->gfp_mask);
> > }

