Re: [RFC PATCH v5 09/16] perf stat: Add function to handle special events in hardware-grouping

From: Ian Rogers
Date: Wed Apr 17 2024 - 02:13:32 EST


On Fri, Apr 12, 2024 at 2:08 PM <weilin.wang@xxxxxxxxx> wrote:
>
> From: Weilin Wang <weilin.wang@xxxxxxxxx>
>
> There are some special events, like the topdown events and TSC, that are not
> described in the pmu-event JSON files. Add support to handle this type of
> event. This should be considered a temporary solution, because including
> these events in the JSON files would be a better one.

What is going to happen in the other cases: uncore, software and core
events for other architectures? Topdown is annoyingly special, but the
MSR events should be similar to tool events like duration_time, in
that they don't have grouping restrictions.
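
For illustration only, an untested standalone sketch (not part of this
series) of how the prefix match used by is_special_event() might be
extended so that such unconstrained events are also treated as "free"
for grouping; the duration_time and msr/ entries below are hypothetical
additions, not something the patch adds:

/*
 * Untested sketch: prefix match for events that should not consume a
 * general-purpose counter when building groups.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static const char *const free_counter_prefixes[] = {
	"topdown-",		/* fixed topdown slots */
	"TSC",			/* time stamp counter */
	"duration_time",	/* tool event, no PMU counter (hypothetical) */
	"msr/",			/* msr PMU, e.g. msr/tsc/ (hypothetical) */
};

static bool uses_free_counter(const char *id)
{
	for (size_t i = 0; i < sizeof(free_counter_prefixes) /
			       sizeof(free_counter_prefixes[0]); i++) {
		if (!strncmp(id, free_counter_prefixes[i],
			     strlen(free_counter_prefixes[i])))
			return true;
	}
	return false;
}

int main(void)
{
	const char *ids[] = {
		"topdown-retiring", "INST_RETIRED.ANY", "msr/tsc/",
	};

	for (size_t i = 0; i < sizeof(ids) / sizeof(ids[0]); i++)
		printf("%-20s -> %s\n", ids[i],
		       uses_free_counter(ids[i]) ?
		       "free (no counter constraint)" : "needs a GP counter");
	return 0;
}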

Thanks,
Ian

>
> Signed-off-by: Weilin Wang <weilin.wang@xxxxxxxxx>
> ---
> tools/perf/util/metricgroup.c | 38 ++++++++++++++++++++++++++++++++++-
> 1 file changed, 37 insertions(+), 1 deletion(-)
>
> diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
> index 04d988ace734..681aacc15787 100644
> --- a/tools/perf/util/metricgroup.c
> +++ b/tools/perf/util/metricgroup.c
> @@ -162,6 +162,20 @@ struct metric {
>
> /* Maximum number of counters per PMU*/
> #define NR_COUNTERS 16
> +/* Special events that are not described in pmu-event JSON files.
> + * topdown-* and TSC use dedicated registers and are treated as
> + * free counters for grouping purposes.
> + */
> +enum special_events {
> + TOPDOWN = 0,
> + TSC = 1,
> + SPECIAL_EVENT_MAX,
> +};
> +
> +static const char *const special_event_names[SPECIAL_EVENT_MAX] = {
> + "topdown-",
> + "TSC",
> +};
>
> /**
> * An event used in a metric. This info is for metric grouping.
> @@ -2142,6 +2156,15 @@ static int create_grouping(struct list_head *pmu_info_list,
> return ret;
> };
>
> +static bool is_special_event(const char *id)
> +{
> + for (int i = 0; i < SPECIAL_EVENT_MAX; i++) {
> + if (!strncmp(id, special_event_names[i], strlen(special_event_names[i])))
> + return true;
> + }
> + return false;
> +}
> +
> /**
> * hw_aware_build_grouping - Build event groupings by reading counter
> * requirement of the events and counter available on the system from
> @@ -2166,6 +2189,17 @@ static int hw_aware_build_grouping(struct expr_parse_ctx *ctx __maybe_unused,
> hashmap__for_each_entry(ctx->ids, cur, bkt) {
> const char *id = cur->pkey;
>
> + if (is_special_event(id)) {
> + struct metricgroup__event_info *event;
> +
> + event = event_info__new(id, "default_core", "0",
> + /*free_counter=*/true);
> + if (!event)
> + goto err_out;
> +
> + list_add(&event->nd, &event_info_list);
> + continue;
> + }
> ret = get_metricgroup_events(id, etable, &event_info_list);
> if (ret)
> goto err_out;
> @@ -2636,8 +2670,10 @@ int metricgroup__parse_groups(struct evlist *perf_evlist,
> ret = hw_aware_parse_groups(perf_evlist, pmu, str,
> metric_no_threshold, user_requested_cpu_list, system_wide,
> /*fake_pmu=*/NULL, metric_events, table);
> - if (!ret)
> + if (!ret) {
> + pr_info("Hardware aware grouping completed\n");
> return 0;
> + }
> }
>
> return parse_groups(perf_evlist, pmu, str, metric_no_group, metric_no_merge,
> --
> 2.42.0
>