Re: [GIT PULL rcu/next] RCU commits for 3.1

From: Paul E. McKenney
Date: Mon Nov 07 2011 - 12:01:39 EST


On Mon, Nov 07, 2011 at 05:35:56PM +0100, Peter Zijlstra wrote:
> On Mon, 2011-11-07 at 16:16 +0000, Stephane Eranian wrote:
> > On Mon, Nov 7, 2011 at 3:15 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > > So far nobody seems to have stated if this is an actual problem or just
> > > shutting up lockdep-prove-rcu? I very much suspect the latter, in which
> > > case I really utterly hate the patch because it adds instructions to
> > > fast-paths just to kill a debug warning.
> > >
> > I think the core issue at stake here is not so much the cgroup disappearing.
> > It cannot go away because it is ref counted (perf_events does the necessary
> > css_get()/css_put()). But it is rather the task disappearing while we
> > are operating
> > on its state.
> >
> > I don't think task (prev or next) can disappear while we execute
> > perf_cgroup_sched_out()/perf_cgroup_sched_in() because we are in the context
> > switch code.
>
> Right.
>
> > What remains is:
> > * update_cgrp_time_from_event()
> > alway operates on current task
> >
> > * perf_cgroup_set_timestamp()
> >
> > - perf_event_task_tick() -> cpu_ctx_sched_in() but in this case
> > it is on the current task
> > - perf_event_task_sched_in() in context switch code so I assume
> > it is safe
> > - __perf_event_enable() but it is called on current
> >
> > - perf_cgroup_switch()
> > * perf_cgroup_sched_in()/perf_cgroup_sched_out() -> context switch code
> >
> > * perf_cgroup_attach()
> > called from cgroup code. Does not appear to hold task_lock().
> > the routine already grabs the rcu_read_lock() but it that enough
> > to guarantee the task cannot
> > vanish. I would hope so, otherwise I think the cgroup attach
> > code has a problem.
>
> yeah, task_struct is rcu-freed

But we are not in an RCU read-side critical section, otherwise the splat
would not have happened. Or did I miss a turn in the analysis roadmap
above?

> > In summary, unless I am mistaken, it looks to me that we may not need
> > those new rcu_read_lock()
> > calls after all.
> >
> > Does anyone have a different analysis?
>
> The only other problem I could see is that perf_cgroup_sched_{in,out}
> can race against perf_cgroup_attach_task() and make the wrong decision.
> But then perf_cgroup_attach will call perf_cgroup_switch() to fix that
> up again.

If this really is a false positive, what should be used to get rid of
the splats?

Thanx, Paul

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/