[patch V2 10/20] trace/bpf: Use migrate disable in trace_call_bpf()

From: Thomas Gleixner
Date: Thu Feb 20 2020 - 15:57:02 EST


BPF does not require preemption to be disabled. It only requires staying on the
same CPU while running a program. Reflect this by replacing the
preempt_disable/enable() pair with migrate_disable/enable().

On a non-RT kernel this maps to preempt_disable/enable().

Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
---
kernel/trace/bpf_trace.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -83,7 +83,7 @@ unsigned int trace_call_bpf(struct trace
 	if (in_nmi())		/* not supported yet */
 		return 1;
 
-	preempt_disable();
+	migrate_disable();
 
 	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
 		/*
@@ -115,7 +115,7 @@ unsigned int trace_call_bpf(struct trace
 
  out:
 	__this_cpu_dec(bpf_prog_active);
-	preempt_enable();
+	migrate_enable();
 
 	return ret;
 }