Re: [PATCH] hugetlb/cgroup: Simplify pre_destroy callback

From: Kamezawa Hiroyuki
Date: Thu Jul 19 2012 - 06:27:48 EST


(2012/07/19 18:41), Aneesh Kumar K.V wrote:
> Li Zefan <lizefan@xxxxxxxxxx> writes:
>
>> On 2012/7/19 10:55, Aneesh Kumar K.V wrote:
>>
>>> Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> writes:
>>>
>>>> On Wed, 18 Jul 2012 11:04:09 +0530
>>>> "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxxxxxxx> wrote:
>>>>
>>>>> From: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
>>>>>
>>>>> Since we cannot fail in hugetlb_cgroup_move_parent, we don't really
>>>>> need to check whether the cgroup has any charge left after that. Also
>>>>> skip those hstates for which we don't have any charge in this cgroup.
>>>>>
>>>>> ...
>>>>>
>>>>> +	for_each_hstate(h) {
>>>>> +		/*
>>>>> +		 * if we don't have any charge, skip this hstate
>>>>> +		 */
>>>>> +		idx = hstate_index(h);
>>>>> +		if (res_counter_read_u64(&h_cg->hugepage[idx], RES_USAGE) == 0)
>>>>> +			continue;
>>>>> +		spin_lock(&hugetlb_lock);
>>>>> +		list_for_each_entry(page, &h->hugepage_activelist, lru)
>>>>> +			hugetlb_cgroup_move_parent(idx, cgroup, page);
>>>>> +		spin_unlock(&hugetlb_lock);
>>>>> +		VM_BUG_ON(res_counter_read_u64(&h_cg->hugepage[idx], RES_USAGE));
>>>>> +	}
>>>>>  out:
>>>>>  	return ret;
>>>>>  }
>>>>>
>>>> This looks fishy.
>>>>
>>>> We test RES_USAGE before taking hugetlb_lock. What prevents some other
>>>> thread from increasing RES_USAGE after that test?
>>>>
>>>> After walking the list we test RES_USAGE after dropping hugetlb_lock.
>>>> What prevents another thread from incrementing RES_USAGE before that
>>>> test, triggering the BUG?
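
To make the window concrete, here is a rough interleaving of the sort
being asked about (a sketch only; it assumes some path can still charge
the cgroup while pre_destroy runs, which is exactly what is debated below):

	/*
	 * Sketch of the suspected interleaving (illustrative, not
	 * actual kernel code):
	 *
	 *   hugetlb_cgroup_pre_destroy()        task still charging h_cg
	 *   ------------------------------      --------------------------
	 *   res_counter_read_u64(...) == 0
	 *                                       alloc_huge_page()
	 *                                         -> charges h_cg, so
	 *                                            RES_USAGE becomes > 0,
	 *                                            page put on activelist
	 *   spin_lock(&hugetlb_lock);
	 *   ... move pages to parent ...
	 *   spin_unlock(&hugetlb_lock);
	 *                                       another charge lands here
	 *   VM_BUG_ON(RES_USAGE != 0);  <-- can fire
	 */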

>>> IIUC the cgroup core prevents a new task from being added to the cgroup
>>> while we are in pre_destroy. Since we already check that the cgroup
>>> doesn't have any tasks, RES_USAGE cannot increase during pre_destroy.



>> You're wrong here. We release cgroup_lock before calling pre_destroy and
>> re-acquire it afterwards, so a task can be attached to the cgroup in this
>> interval.
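
For reference, the rmdir path around this era looks roughly like the
following (a simplified sketch from memory, details elided; not verbatim
kernel code):

	static int cgroup_rmdir(struct inode *dir, struct dentry *dentry)
	{
		struct cgroup *cgrp = dentry->d_fsdata;
		int ret;

		mutex_lock(&cgroup_mutex);
		/* ... check the cgroup looks unused (no children, no refs) ... */
		mutex_unlock(&cgroup_mutex);

		/*
		 * Window: cgroup_mutex is not held here, so a task can
		 * still be attached to cgrp before or while the
		 * ->pre_destroy() callbacks (hugetlb_cgroup_pre_destroy()
		 * among them) run.
		 */
		ret = cgroup_call_pre_destroy(cgrp);
		if (ret)
			return ret;

		mutex_lock(&cgroup_mutex);
		/* ... re-check and actually destroy ... */
		mutex_unlock(&cgroup_mutex);
		return 0;
	}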


> But that means rmdir can be racy, right? What happens if a task got
> added, allocated a few pages and then moved out? We would still have a
> task count of 0 but a few pages left charged, which we missed moving to
> the parent cgroup.
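
One conceivable way to make the hugetlb side tolerate that scenario,
instead of asserting, would be to retry the walk until the usage really
is zero (a sketch only, reusing the locals from the patch above; it
papers over the window rather than fixing the cgroup-layer race):

	/* Sketch: retry instead of VM_BUG_ON() (illustrative only) */
	for_each_hstate(h) {
		idx = hstate_index(h);
		do {
			spin_lock(&hugetlb_lock);
			list_for_each_entry(page, &h->hugepage_activelist, lru)
				hugetlb_cgroup_move_parent(idx, cgroup, page);
			spin_unlock(&hugetlb_lock);
			cond_resched();
		} while (res_counter_read_u64(&h_cg->hugepage[idx], RES_USAGE));
	}

A charge that has been taken but whose page is not yet on the
activelist would still make this loop spin, so it is at best a
mitigation, not a fix.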


That's a problem even if it's verrrry unlikely.
I'd like to look into it and fix the race in the cgroup layer,
but I'm sorry, I'm a bit busy these days...

Thanks,
-Kame

