Re: [PATCH v4 16/27] tracing: Remove regular RCU context for _rcuidle tracepoints (again)

From: Alexei Starovoitov
Date: Fri Mar 06 2020 - 10:51:34 EST


On Fri, Mar 6, 2020 at 3:31 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Fri, Mar 06, 2020 at 11:43:35AM +0100, Peter Zijlstra wrote:
> > On Fri, Feb 21, 2020 at 02:34:32PM +0100, Peter Zijlstra wrote:
> > > Effectively revert commit 865e63b04e9b2 ("tracing: Add back in
> > > rcu_irq_enter/exit_irqson() for rcuidle tracepoints") now that we've
> > > taught perf how to deal with not having an RCU context provided.
> > >
> > > Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> > > Reviewed-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
> > > ---
> > > include/linux/tracepoint.h | 8 ++------
> > > 1 file changed, 2 insertions(+), 6 deletions(-)
> > >
> > > --- a/include/linux/tracepoint.h
> > > +++ b/include/linux/tracepoint.h
> > > @@ -179,10 +179,8 @@ static inline struct tracepoint *tracepo
> > >  		 * For rcuidle callers, use srcu since sched-rcu	\
> > >  		 * doesn't work from the idle path.			\
> > >  		 */							\
> > > -		if (rcuidle) {						\
> > > +		if (rcuidle)						\
> > >  			__idx = srcu_read_lock_notrace(&tracepoint_srcu);\
> > > -			rcu_irq_enter_irqsave();			\
> > > -		}							\
> > >  									\
> > >  		it_func_ptr = rcu_dereference_raw((tp)->funcs);		\
> > >  									\
> > > @@ -194,10 +192,8 @@ static inline struct tracepoint *tracepo
> > >  			} while ((++it_func_ptr)->func);		\
> > >  		}							\
> > >  									\
> > > -		if (rcuidle) {						\
> > > -			rcu_irq_exit_irqsave();				\
> > > +		if (rcuidle)						\
> > >  			srcu_read_unlock_notrace(&tracepoint_srcu, __idx);\
> > > -		}							\
> > >  									\
> > >  		preempt_enable_notrace();				\
> > >  	} while (0)
> >
> > So what happens when BPF registers for these tracepoints? BPF very much
> > wants RCU on AFAIU.
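
The concern: from the idle path RCU is not watching, so a plain
rcu_read_lock() section no longer provides grace-period protection for
anything the probe rcu_dereference()s. A minimal illustration (a
hypothetical probe, not code from the thread):

#include <linux/bug.h>
#include <linux/rcupdate.h>

/* Hypothetical probe attached to an _rcuidle tracepoint. With the
 * rcu_irq_enter_irqsave() call gone, RCU may not be watching here;
 * in that case this rcu_read_lock() does not hold off a grace period
 * and rcu_dereference()'d data can be freed underneath us.
 */
static void example_rcuidle_probe(void *data)
{
	rcu_read_lock();
	WARN_ON_ONCE(!rcu_is_watching());  /* fires from the idle path */
	/* a BPF program run here would rcu_dereference() its maps */
	rcu_read_unlock();
}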
>
> I suspect we need something like this...
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index a2f15222f205..67a39dbce0ce 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1475,11 +1475,13 @@ void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp)
>  static __always_inline
>  void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
>  {
> +	int rcu_flags = trace_rcu_enter();
>  	rcu_read_lock();
>  	preempt_disable();
>  	(void) BPF_PROG_RUN(prog, args);
>  	preempt_enable();
>  	rcu_read_unlock();
> +	trace_rcu_exit(rcu_flags);
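
trace_rcu_enter()/trace_rcu_exit() are helpers added earlier in this
series; their intended semantics are roughly the following (a sketch
assuming they key off rcu_is_watching(), not the verbatim
implementation):

/* Enter regular RCU only if it is not already watching, and remember
 * whether we did, so the exit side undoes exactly that.
 */
int trace_rcu_enter(void)
{
	int state = !rcu_is_watching();

	if (state)
		rcu_irq_enter_irqsave();

	return state;
}

void trace_rcu_exit(int state)
{
	if (state)
		rcu_irq_exit_irqsave();
}

Note that even in the common, non-idle case this puts an
rcu_is_watching() test into the hot path of every BPF program
invocation, which is the cost objected to below.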

One big NACK.
I will not slow down 99% of cases because of one dumb user.
Absolutely no way.