Re: [patch V3 06/22] bpf/trace: Remove redundant preempt_disable from trace_call_bpf()

From: Thomas Gleixner
Date: Mon Feb 24 2020 - 15:43:29 EST


Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx> writes:
> On Mon, Feb 24, 2020 at 03:01:37PM +0100, Thomas Gleixner wrote:
>> --- a/kernel/trace/bpf_trace.c
>> +++ b/kernel/trace/bpf_trace.c
>> @@ -83,7 +83,7 @@ unsigned int trace_call_bpf(struct trace
>>  	if (in_nmi()) /* not supported yet */
>>  		return 1;
>>
>> -	preempt_disable();
>> +	cant_sleep();
>>
>>  	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
>>  		/*
>> @@ -115,7 +115,6 @@ unsigned int trace_call_bpf(struct trace
>>
>>  out:
>>  	__this_cpu_dec(bpf_prog_active);
>> -	preempt_enable();
>
> My testing uncovered that above was too aggressive:
> [ 41.533438] BUG: assuming atomic context at kernel/trace/bpf_trace.c:86
> [ 41.534265] in_atomic(): 0, irqs_disabled(): 0, pid: 2348, name: test_progs
> [ 41.536907] Call Trace:
> [ 41.537167] dump_stack+0x75/0xa0
> [ 41.537546] __cant_sleep.cold.105+0x8b/0xa3
> [ 41.538018] ? exit_to_usermode_loop+0x77/0x140
> [ 41.538493] trace_call_bpf+0x4e/0x2e0
> [ 41.538908] __uprobe_perf_func.isra.15+0x38f/0x690
> [ 41.539399] ? probes_profile_seq_show+0x220/0x220
> [ 41.539962] ? __mutex_lock_slowpath+0x10/0x10
> [ 41.540412] uprobe_dispatcher+0x5de/0x8f0
> [ 41.540875] ? uretprobe_dispatcher+0x7c0/0x7c0
> [ 41.541404] ? down_read_killable+0x200/0x200
> [ 41.541852] ? __kasan_kmalloc.constprop.6+0xc1/0xd0
> [ 41.542356] uprobe_notify_resume+0xacf/0x1d60

Duh. I missed that particular callchain. Uprobes run in plain task
context via uprobe_notify_resume() on the way back to user space, with
preemption enabled, so cant_sleep() rightfully complains there.

> The following fixes it:
>
> commit 7b7b71ff43cc0b15567b60c38a951c8a2cbc97f0 (HEAD -> bpf-next)
> Author: Alexei Starovoitov <ast@xxxxxxxxxx>
> Date: Mon Feb 24 11:27:15 2020 -0800
>
> bpf: disable migration for bpf progs attached to uprobe
>
> trace_call_bpf() no longer disables preemption on its own.
> All callers of this function have to do it explicitly.
>
> Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
>
> diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
> index 18d16f3ef980..7581f5eb6091 100644
> --- a/kernel/trace/trace_uprobe.c
> +++ b/kernel/trace/trace_uprobe.c
> @@ -1333,8 +1333,15 @@ static void __uprobe_perf_func(struct trace_uprobe *tu,
>  	int size, esize;
>  	int rctx;
>
> -	if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
> -		return;
> +	if (bpf_prog_array_valid(call)) {
> +		u32 ret;
> +
> +		migrate_disable();
> +		ret = trace_call_bpf(call, regs);
> +		migrate_enable();
> +		if (!ret)
> +			return;
> +	}
>
> But looking at your patch, cant_sleep() seems unnecessarily strong.
> Should it be cant_migrate() instead?

Yes, if we go with the migrate_disable(). OTOH, having a
preempt_disable() in that uprobe callsite should work as well; then we
can keep the cant_sleep() check, which covers all other callsites
properly. No strong opinion though.

> And the two calls to __this_cpu*() replaced with this_cpu*()?

See above. With migrate_disable() they need to become the this_cpu*()
variants because preemption stays enabled; with a preempt_disable() at
the call site the __this_cpu*() ones remain correct.
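For the migrate_disable() variant that would be (again untested, only
the two per-CPU ops in trace_call_bpf() change):

-	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
+	if (unlikely(this_cpu_inc_return(bpf_prog_active) != 1)) {
...
-	__this_cpu_dec(bpf_prog_active);
+	this_cpu_dec(bpf_prog_active);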

> If you can ack it I can fix it up in place and apply the whole thing.

Ack.

Thanks,

tglx