Re: [PATCH 0/2] perf stat: add per-core count aggregation

From: Stephane Eranian
Date: Tue Feb 12 2013 - 12:26:19 EST


On Tue, Feb 12, 2013 at 6:23 PM, Andi Kleen <ak@xxxxxxxxxxxxxxx> wrote:
> On Tue, Feb 12, 2013 at 03:09:26PM +0100, Stephane Eranian wrote:
>> This patch series contains improvements to the aggregation
>> support in perf stat.
>>
>> First, the aggregation code is refactored and an aggr_mode enum
>> is defined. There is also an important bug fix for the existing
>> per-socket aggregation.
>>
>> Second, the patch adds a new --aggr-core option to perf stat.
>
> Perhaps it's just me, but the option name is ugly (and sounds
> aggressive)
>
> --per-core perhaps?
>
I chose that name to be similar to --aggr-socket.
But we could change both at this point.


> The idea itself is useful.
>
Yes, it is.

>> It aggregates counts per physical core, which is useful on
>> systems with hyper-threading. The cores are presented per
>> socket: S0-C1 means socket 0, core 1. Note that the core number
>> is the physical core id, so numbers may not always be
>> contiguous. All of this is based on topology information
>> available in sysfs.
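For illustration, the per-core aggregation idea can be sketched in a few
lines of Python. This is not the actual perf code; the topology map and
counts below are made-up sample data for a one-socket, two-core machine
with hyper-threading (perf itself reads the equivalent mapping from
/sys/devices/system/cpu/cpuN/topology/{physical_package_id,core_id}):

```python
# Sketch: sum per-CPU counts over CPUs that share a (socket, core) pair.
from collections import defaultdict

# Hypothetical topology: cpu -> (physical_package_id, core_id).
topology = {0: (0, 0), 1: (0, 1), 2: (0, 0), 3: (0, 1)}

# Hypothetical per-CPU cycle counts.
counts = {0: 3_000_000, 1: 3_200_000, 2: 3_100_000, 3: 3_150_000}

def aggregate_per_core(topology, counts):
    # (socket, core) -> [number of CPUs aggregated, summed count]
    per_core = defaultdict(lambda: [0, 0])
    for cpu, count in counts.items():
        entry = per_core[topology[cpu]]
        entry[0] += 1
        entry[1] += count
    return dict(per_core)

for (sock, core), (ncpus, total) in sorted(aggregate_per_core(topology, counts).items()):
    print(f"S{sock}-C{core}  {ncpus}  {total:,}")
```

Each output row mirrors the S0-C0-style lines perf stat prints: the
hyper-thread siblings' counts collapse into one per-core total.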
>>
>> Per-core aggregation can be combined with interval printing:
>
> FWIW this would be much nicer if stat had a Kevents or Mevents mode.
> Usually we don't need all the digits. But that could be added separately.
>
> Does it work for multiple events in parallel?

Yes, it does. It's all regular perf stat.
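For what it's worth, the Kevents/Mevents scaling Andi suggests is easy to
sketch. This is not an existing perf stat option, just a rough idea of what
such a formatter might do: pick the largest 1000-based unit that keeps the
value at or above 1.

```python
# Hypothetical count formatter: scale to G/M/K units, two decimals.
def human_count(n):
    for factor, suffix in ((1_000_000_000, "G"), (1_000_000, "M"), (1_000, "K")):
        if n >= factor:
            return f"{n / factor:.2f}{suffix}"
    return str(n)
```

With this, a row like 6,051,254,899 cycles would print as 6.05G.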


>>
>> # perf stat -a --aggr-core -I 1000 -e cycles sleep 100
>> # time core cpus counts events
>> 1.000101160 S0-C0 2 6,051,254,899 cycles
>> 1.000101160 S0-C1 2 6,379,230,776 cycles
>> 1.000101160 S0-C2 2 6,480,268,471 cycles
>> 1.000101160 S0-C3 2 6,110,514,321 cycles
>> 2.000663750 S0-C0 2 6,572,533,016 cycles
>> 2.000663750 S0-C1 2 6,378,623,674 cycles
>> 2.000663750 S0-C2 2 6,264,127,589 cycles
>> 2.000663750 S0-C3 2 6,305,346,613 cycles
>
> -Andi
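For anyone scripting against the interval output quoted above, a minimal
parsing sketch follows. The field layout (time, core, cpus, counts, event)
is assumed from the example, not from a documented stable format:

```python
# Parse one per-core interval line of the form:
#   <time> <S?-C?> <ncpus> <count> <event>
def parse_line(line):
    time, core, ncpus, count, event = line.split()
    return {
        "time": float(time),
        "core": core,
        "ncpus": int(ncpus),
        "count": int(count.replace(",", "")),  # strip thousands separators
        "event": event,
    }

rec = parse_line("1.000101160 S0-C0 2 6,051,254,899 cycles")
```

A real consumer would also skip the leading "# time core cpus counts events"
header line before parsing.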