Re: [RFC][PATCH v3 10/10] memcg : reclaim statistics

From: KAMEZAWA Hiroyuki
Date: Thu May 26 2011 - 21:21:48 EST


On Thu, 26 May 2011 18:17:04 -0700
Ying Han <yinghan@xxxxxxxxxx> wrote:

> Hi Kame:
>
> I applied the patch on top of mmotm-2011-05-12-15-52. After boot-up, I
> keep getting the following crash when reading
> /dev/cgroup/memory/memory.reclaim_stat:
>
> [ 200.776366] Kernel panic - not syncing: Fatal exception
> [ 200.781591] Pid: 7535, comm: cat Tainted: G D W 2.6.39-mcg-DEV #130
> [ 200.788463] Call Trace:
> [ 200.790916] [<ffffffff81405a75>] panic+0x91/0x194
> [ 200.797096] [<ffffffff81408ac8>] oops_end+0xae/0xbe
> [ 200.803450] [<ffffffff810398d3>] die+0x5a/0x63
> [ 200.809366] [<ffffffff81408561>] do_trap+0x121/0x130
> [ 200.814427] [<ffffffff81037fe6>] do_divide_error+0x90/0x99
> [ 200.821395] [<ffffffff81112bcb>] ? mem_cgroup_reclaim_stat_read+0x28/0xf0
> [ 200.829624] [<ffffffff81104509>] ? page_add_new_anon_rmap+0x7e/0x90
> [ 200.837372] [<ffffffff810fb7f8>] ? handle_pte_fault+0x28a/0x775
> [ 200.844773] [<ffffffff8140f0f5>] divide_error+0x15/0x20
> [ 200.851471] [<ffffffff81112bcb>] ? mem_cgroup_reclaim_stat_read+0x28/0xf0
> [ 200.859729] [<ffffffff810a4a01>] cgroup_seqfile_show+0x38/0x46
> [ 200.867036] [<ffffffff810a4d72>] ? cgroup_lock+0x17/0x17
> [ 200.872444] [<ffffffff81133f2c>] seq_read+0x182/0x361
> [ 200.878984] [<ffffffff8111a0c4>] vfs_read+0xab/0x107
> [ 200.885403] [<ffffffff8111a1e0>] sys_read+0x4a/0x6e
> [ 200.891764] [<ffffffff8140f469>] sysenter_dispatch+0x7/0x27
>
> I will debug it, but I'd like to post here in case I missed some patches in between.
>

mem->scanned must still be 0, so
mem->reclaimed * 100 / mem->scanned causes the divide error.

It should be mem->reclaimed * 100 / (mem->scanned + 1).
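
Something like this on top of the patch should avoid the zero division
(just a sketch, untested):

--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ ... @@ mem_cgroup_reclaim_stat_read()
-       val = mem->reclaimed * 100 / mem->scanned;
+       /* mem->scanned stays 0 until the first reclaim runs */
+       val = mem->reclaimed * 100 / (mem->scanned + 1);
        cb->fill(cb, "recent_scan_success_ratio", val);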

I'll fix it. Thank you for reporting.

Thanks,
-Kame


> --Ying
>
> On Wed, May 25, 2011 at 10:36 PM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >
> > This patch adds a file memory.reclaim_stat.
> >
> > This file shows the following.
> > ==
> > recent_scan_success_ratio 12  # recent reclaim/scan ratio.
> > limit_scan_pages 671          # scan caused by hitting limit.
> > limit_freed_pages 538         # freed pages by limit_scan
> > limit_elapsed_ns 518555076    # elapsed time in LRU scanning by limit.
> > soft_scan_pages 0             # scan caused by softlimit.
> > soft_freed_pages 0            # freed pages by soft_scan.
> > soft_elapsed_ns 0             # elapsed time in LRU scanning by softlimit.
> > margin_scan_pages 16744221    # scan caused by auto-keep-margin
> > margin_freed_pages 565943     # freed pages by auto-keep-margin.
> > margin_elapsed_ns 5545388791  # elapsed time in LRU scanning by auto-keep-margin
> >
> > This patch adds a new file rather than adding more stats to memory.stat.
> > This way, the accounting can be reset with
> >
> >   # echo 0 > .../memory.reclaim_stat
> >
> > This is useful for debugging and tuning.
> >
> > TODO:
> >   - add Documentation.
> >
> > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> > ---
> >  mm/memcontrol.c |   87 ++++++++++++++++++++++++++++++++++++++++++++++++++------
> >  1 file changed, 79 insertions(+), 8 deletions(-)
> >
> > Index: memcg_async/mm/memcontrol.c
> > ===================================================================
> > --- memcg_async.orig/mm/memcontrol.c
> > +++ memcg_async/mm/memcontrol.c
> > @@ -216,6 +216,13 @@ static void mem_cgroup_update_margin_to_
> >  static void mem_cgroup_may_async_reclaim(struct mem_cgroup *mem);
> >  static void mem_cgroup_reflesh_scan_ratio(struct mem_cgroup *mem);
> >
> > +enum scan_type {
> > +       LIMIT_SCAN,     /* scan memory because memcg hits limit */
> > +       SOFT_SCAN,      /* scan memory because of soft limit */
> > +       MARGIN_SCAN,    /* scan memory for making margin to limit */
> > +       NR_SCAN_TYPES,
> > +};
> > +
> >  /*
> >   * The memory controller data structure. The memory controller controls both
> >   * page cache and RSS per cgroup. We would eventually like to provide
> > @@ -300,6 +307,13 @@ struct mem_cgroup {
> >         unsigned long   scanned;
> >         unsigned long   reclaimed;
> >         unsigned long   next_scanratio_update;
> > +       /* For statistics */
> > +       struct {
> > +               unsigned long nr_scanned_pages;
> > +               unsigned long nr_reclaimed_pages;
> > +               unsigned long elapsed_ns;
> > +       } scan_stat[NR_SCAN_TYPES];
> > +
> >         /*
> >          * percpu counter.
> >          */
> > @@ -1426,7 +1440,9 @@ unsigned int mem_cgroup_swappiness(struc
> >
> >  static void __mem_cgroup_update_scan_ratio(struct mem_cgroup *mem,
> >                                 unsigned long scanned,
> > -                               unsigned long reclaimed)
> > +                               unsigned long reclaimed,
> > +                               unsigned long elapsed,
> > +                               enum scan_type type)
> >  {
> >         unsigned long limit;
> >
> > @@ -1439,6 +1455,9 @@ static void __mem_cgroup_update_scan_rat
> >                 mem->scanned /= 2;
> >                 mem->reclaimed /= 2;
> >         }
> > +       mem->scan_stat[type].nr_scanned_pages += scanned;
> > +       mem->scan_stat[type].nr_reclaimed_pages += reclaimed;
> > +       mem->scan_stat[type].elapsed_ns += elapsed;
> >         spin_unlock(&mem->scan_stat_lock);
> >  }
> >
> > @@ -1448,6 +1467,8 @@ static void __mem_cgroup_update_scan_rat
> >   * @root : root memcg of hierarchy walk.
> >   * @scanned : scanned pages
> >   * @reclaimed: reclaimed pages.
> > + * @elapsed: used time for memory reclaim
> > + * @type : scan type as LIMIT_SCAN, SOFT_SCAN, MARGIN_SCAN.
> >   *
> >   * record scan/reclaim ratio to the memcg both to a child and it's root
> >   * mem cgroup, which is a reclaim target. This value is used for
> > @@ -1457,11 +1478,14 @@ static void __mem_cgroup_update_scan_rat
> >  static void mem_cgroup_update_scan_ratio(struct mem_cgroup *mem,
> >                                  struct mem_cgroup *root,
> >                                 unsigned long scanned,
> > -                               unsigned long reclaimed)
> > +                               unsigned long reclaimed,
> > +                               unsigned long elapsed,
> > +                               int type)
> >  {
> > -       __mem_cgroup_update_scan_ratio(mem, scanned, reclaimed);
> > +       __mem_cgroup_update_scan_ratio(mem, scanned, reclaimed, elapsed, type);
> >         if (mem != root)
> > -               __mem_cgroup_update_scan_ratio(root, scanned, reclaimed);
> > +               __mem_cgroup_update_scan_ratio(root, scanned, reclaimed,
> > +                                       elapsed, type);
> >
> >  }
> >
> > @@ -1906,6 +1930,7 @@ static int mem_cgroup_hierarchical_recla
> >         bool is_kswapd = false;
> >         unsigned long excess;
> >         unsigned long nr_scanned;
> > +       unsigned long start, end, elapsed;
> >
> >         excess = res_counter_soft_limit_excess(&root_mem->res) >> PAGE_SHIFT;
> >
> > @@ -1947,18 +1972,24 @@ static int mem_cgroup_hierarchical_recla
> >                 }
> >                 /* we use swappiness of local cgroup */
> >                 if (check_soft) {
> > +                       start = sched_clock();
> >                         ret = mem_cgroup_shrink_node_zone(victim, gfp_mask,
> >                                 noswap, zone, &nr_scanned);
> > +                       end = sched_clock();
> > +                       elapsed = end - start;
> >                         *total_scanned += nr_scanned;
> >                         mem_cgroup_soft_steal(victim, is_kswapd, ret);
> >                         mem_cgroup_soft_scan(victim, is_kswapd, nr_scanned);
> >                         mem_cgroup_update_scan_ratio(victim,
> > -                               root_mem, nr_scanned, ret);
> > +                               root_mem, nr_scanned, ret, elapsed, SOFT_SCAN);
> >                 } else {
> > +                       start = sched_clock();
> >                         ret = try_to_free_mem_cgroup_pages(victim, gfp_mask,
> >                                         noswap, &nr_scanned);
> > +                       end = sched_clock();
> > +                       elapsed = end - start;
> >                         mem_cgroup_update_scan_ratio(victim,
> > -                               root_mem, nr_scanned, ret);
> > +                               root_mem, nr_scanned, ret, elapsed, LIMIT_SCAN);
> >                 }
> >                 css_put(&victim->css);
> >                 /*
> > @@ -4003,7 +4034,7 @@ static void mem_cgroup_async_shrink_work
> >         struct delayed_work *dw = to_delayed_work(work);
> >         struct mem_cgroup *mem, *victim;
> >         long nr_to_reclaim;
> > -       unsigned long nr_scanned, nr_reclaimed;
> > +       unsigned long nr_scanned, nr_reclaimed, start, end;
> >         int delay = 0;
> >
> >         mem = container_of(dw, struct mem_cgroup, async_work);
> > @@ -4022,9 +4053,12 @@ static void mem_cgroup_async_shrink_work
> >         if (!victim)
> >                 goto finish_scan;
> >
> > +       start = sched_clock();
> >         nr_reclaimed = mem_cgroup_shrink_rate_limited(victim, nr_to_reclaim,
> >                                         &nr_scanned);
> > -       mem_cgroup_update_scan_ratio(victim, mem, nr_scanned, nr_reclaimed);
> > +       end = sched_clock();
> > +       mem_cgroup_update_scan_ratio(victim, mem, nr_scanned, nr_reclaimed,
> > +                       end - start, MARGIN_SCAN);
> >         css_put(&victim->css);
> >
> >         /* If margin is enough big, stop */
> > @@ -4680,6 +4714,38 @@ static int mem_control_stat_show(struct
> >         return 0;
> >  }
> >
> > +static int mem_cgroup_reclaim_stat_read(struct cgroup *cont, struct cftype *cft,
> > +                               struct cgroup_map_cb *cb)
> > +{
> > +       struct mem_cgroup *mem = mem_cgroup_from_cont(cont);
> > +       u64 val;
> > +       int i; /* for indexing scan_stat[] */
> > +
> > +       val = mem->reclaimed * 100 / mem->scanned;
> > +       cb->fill(cb, "recent_scan_success_ratio", val);
> > +       i = LIMIT_SCAN;
> > +       cb->fill(cb, "limit_scan_pages", mem->scan_stat[i].nr_scanned_pages);
> > +       cb->fill(cb, "limit_freed_pages", mem->scan_stat[i].nr_reclaimed_pages);
> > +       cb->fill(cb, "limit_elapsed_ns", mem->scan_stat[i].elapsed_ns);
> > +       i = SOFT_SCAN;
> > +       cb->fill(cb, "soft_scan_pages", mem->scan_stat[i].nr_scanned_pages);
> > +       cb->fill(cb, "soft_freed_pages", mem->scan_stat[i].nr_reclaimed_pages);
> > +       cb->fill(cb, "soft_elapsed_ns", mem->scan_stat[i].elapsed_ns);
> > +       i = MARGIN_SCAN;
> > +       cb->fill(cb, "margin_scan_pages", mem->scan_stat[i].nr_scanned_pages);
> > +       cb->fill(cb, "margin_freed_pages", mem->scan_stat[i].nr_reclaimed_pages);
> > +       cb->fill(cb, "margin_elapsed_ns", mem->scan_stat[i].elapsed_ns);
> > +       return 0;
> > +}
> > +
> > +static int mem_cgroup_reclaim_stat_reset(struct cgroup *cgrp, unsigned int event)
> > +{
> > +       struct mem_cgroup *mem = mem_cgroup_from_cont(cgrp);
> > +       memset(mem->scan_stat, 0, sizeof(mem->scan_stat));
> > +       return 0;
> > +}
> > +
> > +
> >  /*
> >   * User flags for async_control is a subset of mem->async_flags. But
> >   * this needs to be defined independently to hide implemation details.
> > @@ -5163,6 +5229,11 @@ static struct cftype mem_cgroup_files[]
> >                 .open = mem_control_numa_stat_open,
> >         },
> >  #endif
> > +       {
> > +               .name = "reclaim_stat",
> > +               .read_map = mem_cgroup_reclaim_stat_read,
> > +               .trigger = mem_cgroup_reclaim_stat_reset,
> > +       }
> >  };
> >
> >  #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
> >
