Re: [PATCH] alloc_tag: add per-NUMA node stats

From: Kent Overstreet
Date: Thu Jun 12 2025 - 11:43:50 EST


On Thu, Jun 12, 2025 at 01:36:05PM +0800, David Wang wrote:
> Hi,
>
> On Tue, 10 Jun 2025 17:30:53 -0600 Casey Chen <cachen@xxxxxxxxxxxxxxx> wrote:
> > Add support for tracking per-NUMA node statistics in /proc/allocinfo.
> > Previously, each alloc_tag had a single set of counters (bytes and
> > calls), aggregated across all CPUs. With this change, each CPU can
> > maintain separate counters for each NUMA node, allowing finer-grained
> > memory allocation profiling.
> >
> > This feature is controlled by the new
> > CONFIG_MEM_ALLOC_PROFILING_PER_NUMA_STATS option:
> >
> > * When enabled (=y), the output includes per-node statistics following
> > the total bytes/calls:
> >
> > <size> <calls> <tag info>
> > ...
> > 315456 9858 mm/dmapool.c:338 func:pool_alloc_page
> > nid0 94912 2966
> > nid1 220544 6892
> > 7680 60 mm/dmapool.c:254 func:dma_pool_create
> > nid0 4224 33
> > nid1 3456 27
> >
> > * When disabled (=n), the output remains unchanged:
> > <size> <calls> <tag info>
> > ...
> > 315456 9858 mm/dmapool.c:338 func:pool_alloc_page
> > 7680 60 mm/dmapool.c:254 func:dma_pool_create
> >
> > To minimize memory overhead, per-NUMA stats counters are dynamically
> > allocated using the percpu allocator. PERCPU_DYNAMIC_RESERVE has been
> > increased to ensure sufficient space for in-kernel alloc_tag counters.
> >
> > For in-kernel alloc_tag instances, pcpu_alloc_noprof() is used to
> > allocate counters. These allocations are excluded from the profiling
> > statistics themselves.
>
> Considering NUMA balance, I have two questions:
> 1. Do we need the granularity of calling sites?
> We need that granularity to identify a possible memory leak, or a place
> where memory usage can be optimized.
> But for a NUMA imbalance, the calling site would mostly be *innocent*; the
> clue normally lies in the cpu making the allocation, the memory interface, etc...
> The point is, when a NUMA imbalance happens, can it be fixed by adjusting the calling sites?
> Isn't <cpu, memory interface/slab name, numa id> enough to be used as a key for numa
> stats analysis?
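For reference, the shape of the counters the patch describes - one
bytes/calls pair per CPU per node, summed for the total line and summed
across CPUs for each "nidN" line - boils down to something like this
userspace sketch (all names, array sizes, and the flat-array layout here
are illustrative; the actual patch allocates these via the percpu
allocator):

```c
#include <stddef.h>

#define NR_CPUS  4   /* illustrative, not the kernel's NR_CPUS */
#define NR_NODES 2

struct alloc_counters {
	unsigned long bytes;
	unsigned long calls;
};

/* One counter set per CPU per node; in the kernel these live in
 * dynamically allocated percpu memory, not a static array. */
static struct alloc_counters counters[NR_CPUS][NR_NODES];

/* Account one allocation of `size` bytes made on `cpu` from node `nid`. */
static void account_alloc(int cpu, int nid, size_t size)
{
	counters[cpu][nid].bytes += size;
	counters[cpu][nid].calls += 1;
}

/* Sum across CPUs for one node: the "nidN <bytes> <calls>" lines. */
static struct alloc_counters node_total(int nid)
{
	struct alloc_counters t = { 0, 0 };
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		t.bytes += counters[cpu][nid].bytes;
		t.calls += counters[cpu][nid].calls;
	}
	return t;
}

/* Sum across all nodes: the existing "<size> <calls> <tag info>" line. */
static struct alloc_counters grand_total(void)
{
	struct alloc_counters t = { 0, 0 };
	for (int nid = 0; nid < NR_NODES; nid++) {
		struct alloc_counters n = node_total(nid);
		t.bytes += n.bytes;
		t.calls += n.calls;
	}
	return t;
}
```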

kmalloc_node().

Per callsite is the right granularity.

But AFAIK correlating profiling information with the allocation is still
an entirely manual process, so that's the part I'm interested in right
now.

Under the hood, memory allocation profiling gives you the ability to map
any specific allocation to the line of code that owns it - that is, to
map a kernel virtual address to a codetag.
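Conceptually that's a reverse map from address ranges to codetags. A toy
userspace version might look like the sketch below - a flat lookup table
standing in for the kernel's per-object metadata (the kernel keeps the tag
reference alongside the object itself rather than in any table like this;
every name here is made up for illustration):

```c
#include <stddef.h>

struct codetag {
	const char *file;
	int line;
	const char *func;
};

struct alloc_record {
	void *start;
	size_t size;
	const struct codetag *ct;
};

/* Example tag matching one of the /proc/allocinfo lines quoted above. */
static const struct codetag pool_alloc_tag = {
	"mm/dmapool.c", 338, "pool_alloc_page"
};

/* Toy flat table; the kernel stores this per object, not like this. */
static struct alloc_record records[16];
static int nr_records;

/* Remember which codetag owns the range [p, p + size). */
static void record_alloc(void *p, size_t size, const struct codetag *ct)
{
	if (nr_records < 16) {
		records[nr_records].start = p;
		records[nr_records].size = size;
		records[nr_records].ct = ct;
		nr_records++;
	}
}

/* The lookup profiling enables: virtual address -> owning codetag. */
static const struct codetag *addr_to_codetag(const void *addr)
{
	for (int i = 0; i < nr_records; i++) {
		const char *s = records[i].start;
		const char *a = addr;
		if (a >= s && a < s + records[i].size)
			return records[i].ct;
	}
	return NULL;
}
```

With a map like this, a data address sampled by a profiler could in
principle be attributed back to the allocating line of code - which is
exactly the correlation step that is still manual today.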

But I don't know if perf collects _data_ addresses on cache misses. Does
anyone?