Re: [PATCH v13 01/17] preempt: Track NMI nesting to separate per-CPU counter

From: Peter Zijlstra

Date: Mon Oct 13 2025 - 16:00:56 EST


On Mon, Oct 13, 2025 at 11:48:03AM -0400, Lyude Paul wrote:
> From: Joel Fernandes <joelagnelf@xxxxxxxxxx>
>
> Move NMI nesting tracking from the preempt_count bits to a separate per-CPU
> counter (nmi_nesting). This frees up the NMI bits in the preempt_count so
> they can be repurposed, and also allows tracking nesting more than 16 levels
> deep if there is ever a need.
>
> Suggested-by: Boqun Feng <boqun.feng@xxxxxxxxx>
> Signed-off-by: Joel Fernandes <joelaf@xxxxxxxxxx>
> Signed-off-by: Lyude Paul <lyude@xxxxxxxxxx>
> ---
> include/linux/hardirq.h | 17 +++++++++++++----
> kernel/softirq.c | 2 ++
> rust/kernel/alloc/kvec.rs | 5 +----
> rust/kernel/cpufreq.rs | 3 +--
> 4 files changed, 17 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
> index d57cab4d4c06f..177eed1de35cc 100644
> --- a/include/linux/hardirq.h
> +++ b/include/linux/hardirq.h
> @@ -10,6 +10,8 @@
> #include <linux/vtime.h>
> #include <asm/hardirq.h>
>
> +DECLARE_PER_CPU(unsigned int, nmi_nesting);

Urgh, and it isn't even in the same cacheline as the preempt_count :/