Re: [tip:perf/core] perf: Per-pmu-per-cpu contexts

From: Paul E. McKenney
Date: Fri Sep 10 2010 - 11:38:00 EST


On Fri, Sep 10, 2010 at 04:54:29PM +0200, Frederic Weisbecker wrote:
> On Thu, Sep 09, 2010 at 07:51:53PM +0000, tip-bot for Peter Zijlstra wrote:
> > @@ -3745,18 +3757,20 @@ static void perf_event_task_ctx(struct perf_event_context *ctx,
> >
> > static void perf_event_task_event(struct perf_task_event *task_event)
> > {
> > - struct perf_cpu_context *cpuctx;
> > struct perf_event_context *ctx = task_event->task_ctx;
> > + struct perf_cpu_context *cpuctx;
> > + struct pmu *pmu;
> >
> > - rcu_read_lock();
> > - cpuctx = &get_cpu_var(perf_cpu_context);
> > - perf_event_task_ctx(&cpuctx->ctx, task_event);
> > + rcu_read_lock_sched();
> > + list_for_each_entry_rcu(pmu, &pmus, entry) {
> > + cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
> > + perf_event_task_ctx(&cpuctx->ctx, task_event);
> > + }
> > if (!ctx)
> > ctx = rcu_dereference(current->perf_event_ctxp);
>
>
>
> So, you say below that it works because synchronize_srcu(), which
> waits for a quiescent state after the pmus list is touched, implies
> synchronize_sched(), right?

Ook... My current plans to fold SRCU into TREE_RCU would invalidate
this assumption.

Maybe we need some sort of primitive that concurrently waits for
multiple types of RCU grace periods?
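
Absent such a primitive, one way to overlap the two grace periods
would be to push synchronize_srcu() onto a workqueue while
synchronize_sched() runs in the caller. Untested and purely
illustrative; the struct and helper names below are made up:

	struct sync_both_work {
		struct work_struct work;
		struct srcu_struct *sp;
		struct completion done;
	};

	static void sync_both_srcu_func(struct work_struct *work)
	{
		struct sync_both_work *s =
			container_of(work, struct sync_both_work, work);

		synchronize_srcu(s->sp);	/* wait out SRCU readers */
		complete(&s->done);
	}

	/* Wait for an SRCU and a sched-RCU grace period concurrently. */
	static void synchronize_srcu_and_sched(struct srcu_struct *sp)
	{
		struct sync_both_work s;

		INIT_WORK(&s.work, sync_both_srcu_func);
		s.sp = sp;
		init_completion(&s.done);
		schedule_work(&s.work);

		synchronize_sched();	/* wait out preempt-disabled readers */
		wait_for_completion(&s.done);
	}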

Thanx, Paul

> And I guess you picked rcu_read_lock_sched() here because it does
> preempt_disable() at the same time.
>
> That looks complicated, but I guess it works.
>
> That said, there is also this rcu_dereference(current->perf_event_ctxp).
> Now, this ctx is released after an SRCU grace period, right? So this
> should be srcu_dereference(). But then you seem to actually rely on
> rcu_read_lock_sched() as the compatible read side, so this should be
> rcu_dereference_sched()?
>
> As it stands, RCU lockdep will whine.
> Moreover, there seems to be too much interplay between the different
> RCU flavours here, and that breaks the reviewer's parsing.
>
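
For reference, with the flavours matched up the way Frederic suggests,
the read side of the quoted hunk would look something like this
(sketch only, not a tested patch):

	rcu_read_lock_sched();
	list_for_each_entry_rcu(pmu, &pmus, entry) {
		cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
		perf_event_task_ctx(&cpuctx->ctx, task_event);
	}
	if (!ctx)
		/* matches the sched-RCU read side, so lockdep stays quiet */
		ctx = rcu_dereference_sched(current->perf_event_ctxp);
	/* ... use ctx under the same sched-RCU read side ... */
	rcu_read_unlock_sched();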