Re: [PATCH] ring-buffer: More precise time stamps for nested writes

From: Suresh E. Warrier
Date: Fri Mar 27 2015 - 01:39:02 EST



On 03/24/2015 06:19 PM, Steven Rostedt wrote:
> On Tue, 24 Mar 2015 18:10:05 -0500
>
..
..
> There is no architecture where disabling interrupts is cheap. Actually,
> enabling them is the killer. Doing function tracing shows the impact of
> this rather well, as it would disable and enable interrupts for every
> function call, which can get rather expensive in the sum of things, and
> it does skew the timings and can make it more difficult to debug
> heisenbugs.
>

On PPC64, enabling and disabling soft IRQs is simply a matter of
setting or clearing a flag in the per-CPU PACA structure, so it
shouldn't be expensive unless an interrupt is pending when we
re-enable, which should be rare. I ran a series of tests on a POWER8
box, and even with a cold cache both local_irq_save() and
local_irq_restore() cost only tens to a couple of hundred
nanoseconds, with the restore a little more expensive than the save:
the restore took about 80-160 ns on a cold cache and 20-30 ns on a
warm one.
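
For what it's worth, here is a rough sketch of that kind of timing
test (the module and symbol names are made up, not my actual harness;
a real measurement has to loop and average, since a single
ktime_get_ns() read itself costs a comparable number of nanoseconds):

#include <linux/irqflags.h>
#include <linux/ktime.h>
#include <linux/module.h>

static int __init irqcost_init(void)
{
	unsigned long flags;
	u64 t0, t1, t2;

	t0 = ktime_get_ns();
	local_irq_save(flags);		/* ppc64: clears the soft-enable flag in the PACA */
	t1 = ktime_get_ns();
	local_irq_restore(flags);	/* ppc64: sets the flag, replaying any pending interrupt */
	t2 = ktime_get_ns();

	pr_info("irqcost: save=%llu ns, restore=%llu ns\n", t1 - t0, t2 - t1);
	return 0;
}

static void __exit irqcost_exit(void)
{
}

module_init(irqcost_init);
module_exit(irqcost_exit);
MODULE_LICENSE("GPL");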

> That said, I feel your pain. I've had some ideas about doing this
> without disabling interrupts.

That is excellent!

> But for now, what can be done is to have
> a flag that is set that will implement this or not. Using
> static_branch() to implement it such that when it's off it has no effect.
>
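
For reference, my reading of the static_branch() suggestion is
roughly the sketch below. The key and helper names are invented for
illustration, and on current kernels the API is spelled
static_key_false()/static_key_slow_inc() rather than static_branch():

#include <linux/irqflags.h>
#include <linux/jump_label.h>

/* Defaults to off: the hot-path check patches down to a no-op jump. */
static struct static_key precise_nested_ts = STATIC_KEY_INIT_FALSE;

static void rb_reserve_event_example(void)
{
	unsigned long flags = 0;
	bool irqs_off = false;

	if (static_key_false(&precise_nested_ts)) {
		local_irq_save(flags);	/* only paid when the option is on */
		irqs_off = true;
	}

	/* ... reserve space in the ring buffer, take the time stamp ... */

	if (irqs_off)
		local_irq_restore(flags);
}

/* Slow path: flip the key when the user toggles a trace option. */
static void rb_set_precise_ts(bool enable)
{
	if (enable)
		static_key_slow_inc(&precise_nested_ts);
	else
		static_key_slow_dec(&precise_nested_ts);
}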

Are you recommending that, for now, I use a static_branch() instead
of a CONFIG option to fix this? I could do that, but the resulting
code will either be messier to read (with several if-condition
checks) or will require some duplication of code. My assumption is
that the new CONFIG option, when disabled, should have negligible
impact, since the compiler inlines the functions.
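
For comparison, the CONFIG-option version I have in mind looks
roughly like this (CONFIG_RING_BUFFER_PRECISE_NESTED_TS and the
helper names are hypothetical):

#include <linux/irqflags.h>

#ifdef CONFIG_RING_BUFFER_PRECISE_NESTED_TS
static inline void rb_nested_ts_lock(unsigned long *flags)
{
	local_irq_save(*flags);
}
static inline void rb_nested_ts_unlock(unsigned long flags)
{
	local_irq_restore(flags);
}
#else
/* Empty stubs: the compiler inlines and eliminates them entirely. */
static inline void rb_nested_ts_lock(unsigned long *flags) { }
static inline void rb_nested_ts_unlock(unsigned long flags) { }
#endif

With the option off, the calls compile to nothing, which is where the
negligible-impact assumption above comes from.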

-suresh

> In the meantime, I can go and revisit trying to get better timings
> with nested writes.
>
> -- Steve
>
