Re: [PATCH] memcg: don't do cleanup manually if mem_cgroup_css_online() fails

From: Glauber Costa
Date: Tue Apr 02 2013 - 08:21:48 EST


On 04/02/2013 04:16 PM, Michal Hocko wrote:
> On Tue 02-04-13 15:35:28, Li Zefan wrote:
> [...]
>> @@ -6247,16 +6247,7 @@ mem_cgroup_css_online(struct cgroup *cont)
>>
>>  	error = memcg_init_kmem(memcg, &mem_cgroup_subsys);
>>  	mutex_unlock(&memcg_create_mutex);
>> -	if (error) {
>> -		/*
>> -		 * We call put now because our (and parent's) refcnts
>> -		 * are already in place. mem_cgroup_put() will internally
>> -		 * call __mem_cgroup_free, so return directly
>> -		 */
>> -		mem_cgroup_put(memcg);
>> -		if (parent->use_hierarchy)
>> -			mem_cgroup_put(parent);
>> -	}
>> +
>>  	return error;
>>  }
>
> The mem_cgroup_put(parent) part is incorrect because mem_cgroup_put goes
> up the hierarchy already but I do not think mem_cgroup_put(memcg) should
> go away as well. Who is going to free the last reference then?
>
> Maybe I am missing something but we have:
> cgroup_create
>   css = ss->css_alloc(cgrp)
>     mem_cgroup_css_alloc
>       atomic_set(&memcg->refcnt, 1)
>   online_css(ss, cgrp)
>     mem_cgroup_css_online
>       error = memcg_init_kmem	# fails
>       goto err_destroy
> err_destroy:
>   cgroup_destroy_locked(cgrp)
>     offline_css
>       mem_cgroup_css_offline
>
> no mem_cgroup_put on the way.
>

static void mem_cgroup_css_free(struct cgroup *cont)
{
	struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);

	kmem_cgroup_destroy(memcg);

	mem_cgroup_put(memcg);
}

kernel/cgroup.c:
err_free_all:
	for_each_subsys(root, ss) {
		if (cgrp->subsys[ss->subsys_id])
			ss->css_free(cgrp);
	}

As I've said, this error path predates kmemcg by a long time. I still
haven't been able to identify precisely why it was working - assuming it
was indeed working (I remember having tested it with handcrafted manual
errors, but memory can always fail...).

But our best theory so far is that the cgroup rework started invoking
the css_free callback on this error path, where it wasn't called before.
