Re: [PATCH v1 1/2] perf stat: Clear reset_group for each stat run

From: Arnaldo Carvalho de Melo
Date: Tue Aug 23 2022 - 13:07:22 EST


Em Mon, Aug 22, 2022 at 02:33:51PM -0700, Ian Rogers escreveu:
> If a weak group is broken then the reset_group flag remains set for
> the next run. Having reset_group set means the counter isn't created,
> ultimately causing a segfault.
>
> A simple reproduction of this is:
> perf stat -r2 -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:W'
> which will be added as a test in the next patch.

So doing this in that existing BPF-related loop may solve the problem,
but for someone looking just at the source code, without any comment,
it may be cryptic, no?

And then the Fixes tag talks about affinity, adding a bit more
confusion, albeit being the part that does the weak logic :-\

Can we have a comment just before:

+ counter->reset_group = false;

stating that this is needed only when using -r?

- Arnaldo

> Fixes: 4804e0111662 ("perf stat: Use affinity for opening events")
> Signed-off-by: Ian Rogers <irogers@xxxxxxxxxx>
> ---
> tools/perf/builtin-stat.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> index 7fb81a44672d..54cd29d07ca8 100644
> --- a/tools/perf/builtin-stat.c
> +++ b/tools/perf/builtin-stat.c
> @@ -826,6 +826,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
> }
>
> evlist__for_each_entry(evsel_list, counter) {
> + counter->reset_group = false;
> if (bpf_counter__load(counter, &target))
> return -1;
> if (!evsel__is_bpf(counter))
> --
> 2.37.2.609.g9ff673ca1a-goog

--

- Arnaldo