Re: [GIT PULL rcu/next] RCU commits for 3.1

From: Stephane Eranian
Date: Mon Nov 07 2011 - 12:12:51 EST


Paul,

On Mon, Nov 7, 2011 at 4:56 PM, Paul E. McKenney
<paulmck@xxxxxxxxxxxxxxxxxx> wrote:
> On Mon, Nov 07, 2011 at 05:35:56PM +0100, Peter Zijlstra wrote:
>> On Mon, 2011-11-07 at 16:16 +0000, Stephane Eranian wrote:
>> > On Mon, Nov 7, 2011 at 3:15 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>> > > So far nobody seems to have stated if this is an actual problem or just
>> > > shutting up lockdep-prove-rcu? I very much suspect the latter, in which
>> > > case I really utterly hate the patch because it adds instructions to
>> > > fast-paths just to kill a debug warning.
>> > >
>> > I think the core issue at stake here is not so much the cgroup disappearing.
>> > It cannot go away because it is ref counted (perf_events does the necessary
>> > css_get()/css_put()). Rather, it is the task disappearing while we are
>> > operating on its state.
>> >
>> > I don't think task (prev or next) can disappear while we execute
>> > perf_cgroup_sched_out()/perf_cgroup_sched_in() because we are in the context
>> > switch code.
>>
>> Right.
>>
>> > What remains is:
>> >
>> >   * update_cgrp_time_from_event()
>> >     always operates on the current task
>> >
>> >   * perf_cgroup_set_timestamp()
>> >     - perf_event_task_tick() -> cpu_ctx_sched_in(), but in this case
>> >       it is on the current task
>> >     - perf_event_task_sched_in(), in context switch code, so I assume
>> >       it is safe
>> >     - __perf_event_enable(), but it is called on current
>> >
>> >   * perf_cgroup_switch()
>> >     - perf_cgroup_sched_in()/perf_cgroup_sched_out() -> context switch code
>> >     - perf_cgroup_attach()
>> >       called from cgroup code. Does not appear to hold task_lock().
>> >       The routine already grabs rcu_read_lock(), but is that enough to
>> >       guarantee the task cannot vanish? I would hope so; otherwise I
>> >       think the cgroup attach code has a problem.
>>
>> yeah, task_struct is rcu-freed
>
> But we are not in an RCU read-side critical section, otherwise the splat
> would not have happened. Or did I miss a turn in the analysis roadmap
> above?
>
>> > In summary, unless I am mistaken, it looks to me that we may not need
>> > those new rcu_read_lock()
>> > calls after all.
>> >
>> > Does anyone have a different analysis?
>>
>> The only other problem I could see is that perf_cgroup_sched_{in,out}
>> can race against perf_cgroup_attach_task() and make the wrong decision.
>> But then perf_cgroup_attach will call perf_cgroup_switch() to fix that
>> up again.
>
> If this really is a false positive, what should be used to get rid of
> the splats?
>
I think on that path:

>>> [<8108aa02>] perf_event_enable_on_exec+0x1d2/0x1e0
>>> [<81063764>] ? __lock_release+0x54/0xb0
>>> [<8108cca8>] perf_event_comm+0x18/0x60
>>> [<810d1abd>] ? set_task_comm+0x5d/0x80
>>> [<81af622d>] ? _raw_spin_unlock+0x1d/0x40
>>> [<810d1ac4>] set_task_comm+0x64/0x80

We are neither holding the rcu_read_lock() nor the task_lock(), but we
are operating on the current task, so the task cannot just vanish. The
rcu_dereference() and lock_is_held() checks therefore report a false
positive in that case. Yet, I doubt this would be the only place....
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/