Re: [PATCH v3 2/2] tracing/preemptirq: Optimize preempt_disable/enable() tracepoint overhead
From: Peter Zijlstra
Date: Mon Jul 07 2025 - 07:20:21 EST
On Fri, Jul 04, 2025 at 02:07:43PM -0300, Wander Lairson Costa wrote:
> Similar to the IRQ tracepoint, the preempt tracepoints are typically
> disabled in production systems due to the significant overhead they
> introduce even when not in use.
>
> The overhead primarily comes from two sources: First, when tracepoints
> are compiled into the kernel, preempt_count_add() and preempt_count_sub()
> become external function calls rather than inlined operations. Second,
> these functions perform unnecessary preempt_count() checks even when the
> tracepoint itself is disabled.
>
> This optimization introduces an early check of the tracepoint static key,
> which allows us to skip both the function call overhead and the redundant
> preempt_count() checks when tracing is disabled. The change maintains all
> existing functionality when tracing is active while significantly
> reducing overhead for the common case where tracing is inactive.
>
This one in particular worries me for its code-gen impact. There are a
*LOT* of preempt_{dis,en}able() sites in the kernel, and now they all get
this static branch and call crud tacked on.
We spend significant effort to make preempt_{dis,en}able() as small as
possible.