Re: [PATCH] perf tool sort: Use default sort if evlist is empty

From: Namhyung Kim
Date: Sat Jul 22 2017 - 19:11:26 EST


On Fri, Jul 21, 2017 at 01:02:50PM -0700, David Carrillo-Cisneros wrote:
> On Fri, Jul 21, 2017 at 12:44 AM, Jiri Olsa <jolsa@xxxxxxxxxx> wrote:
> > On Thu, Jul 20, 2017 at 10:11:57PM -0700, David Carrillo-Cisneros wrote:
> >> Fixes a bug noted by Jiri in https://lkml.org/lkml/2017/6/13/755, caused
> >> by commit d49dadea7862 ("perf tools: Make 'trace' or 'trace_fields' sort
> >> key default for tracepoint events") not taking into account that the
> >> evlist is empty in pipe mode.
> >>
> >> Without this patch, pipe mode only shows a bogus "100.00% N/A" line
> >> instead of the correct output:
> >>
> >> $ perf record -o - sleep 1 | perf report -i -
> >> # To display the perf.data header info, please use --header/--header-only options.
> >> #
> >> [ perf record: Woken up 1 times to write data ]
> >> [ perf record: Captured and wrote 0.000 MB - ]
> >> #
> >> # Total Lost Samples: 0
> >> #
> >> # Samples: 8 of event 'cycles:ppH'
> >> # Event count (approx.): 145658
> >> #
> >> # Overhead  Trace output
> >> # ........  ............
> >> #
> >>    100.00%  N/A
> >>
> >> Correct output, after patch:
> >>
> >> $ perf record -o - sleep 1 | perf report -i -
> >> # To display the perf.data header info, please use --header/--header-only options.
> >> #
> >> [ perf record: Woken up 1 times to write data ]
> >> [ perf record: Captured and wrote 0.000 MB - ]
> >> #
> >> # Total Lost Samples: 0
> >> #
> >> # Samples: 8 of event 'cycles:ppH'
> >> # Event count (approx.): 191331
> >> #
> >> # Overhead  Command  Shared Object      Symbol
> >> # ........  .......  .................  .................................
> >> #
> >>     81.63%  sleep    libc-2.19.so       [.] _exit
> >>     13.58%  sleep    ld-2.19.so         [.] do_lookup_x
> >>      2.34%  sleep    [kernel.kallsyms]  [k] context_switch
> >>      2.34%  sleep    libc-2.19.so       [.] __GI___libc_nanosleep
> >>      0.11%  perf     [kernel.kallsyms]  [k] __intel_pmu_enable_a
> >>
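
For reference, the fix presumably boils down to treating an empty evlist
the same as a missing one when the default sort keys are picked, i.e.
something along these lines in get_default_sort_order(). The exact hunk is
not quoted here and the names below are from memory, so take it only as a
sketch:

  static const char *get_default_sort_order(struct perf_evlist *evlist)
  {
          struct perf_evsel *evsel;
          ...
          /*
           * In pipe mode the evlist is still empty when the default sort
           * keys are chosen, so an empty list must fall back to the
           * generic defaults just like a NULL evlist, instead of being
           * treated as "tracepoint events only".
           */
          if (evlist == NULL || list_empty(&evlist->entries))
                  goto out_no_evlist;

          /* keep the 'trace' sort key only if every event is a tracepoint */
          evlist__for_each_entry(evlist, evsel) {
                  if (evsel->attr.type != PERF_TYPE_TRACEPOINT)
                          goto out_no_evlist;
          }
          ...
  }
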
> >
> > I wonder if we could reinit the sort order once we know what
> > events we have in the pipe, and recognize the tracepoint output
> > properly:
>
> I see this as hard to do, since at any given point while traversing the
> pipe's content the best we can do is guess that we have already seen all
> event types. We would then need to fall back and redo the output whenever
> a new sample refutes that guess.

After reading the feature events, you could know the number of events, no?
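
In pipe mode the attr records arrive before any sample (and the event
descriptions can also come via the pipe-mode feature records), so in
principle the sort keys could be picked again once those have been
processed. Roughly something like the below - completely untested, with
the function names from memory, so only a sketch of the idea:

  /* hypothetical wrapper around the stock attr handler in builtin-report.c */
  static int report__process_attr(struct perf_tool *tool,
                                  union perf_event *event,
                                  struct perf_evlist **pevlist)
  {
          int err = perf_event__process_attr(tool, event, pevlist);

          if (err)
                  return err;

          /*
           * Drop the sort keys that were chosen while the evlist was
           * still empty and choose them again now that the event types
           * are known.
           */
          reset_output_field();
          return setup_sorting(*pevlist);
  }

wired up as the report tool's .attr callback instead of
perf_event__process_attr.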

Thanks,
Namhyung


>
> >
> > [root@krava perf]# ./perf record -e 'sched:sched_switch' sleep 1 | ./perf report
> > # To display the perf.data header info, please use --header/--header-only options.
> >
> > SNIP
> >
> > #
> > # Overhead  Command  Shared Object      Symbol
> > # ........  .......  .................  ..............
> > #
> >    100.00%  sleep    [kernel.kallsyms]  [k] __schedule
> >
> >
> > Also, I've got another crash (added the -a option to the above example):
> >
> > [root@krava perf]# ./perf record -e 'sched:sched_switch' -a sleep 1 | ./perf report
> > # To display the perf.data header info, please use --header/--header-only options.
> > #
> > [ perf record: Woken up 1 times to write data ]
> > [ perf record: Captured and wrote 0.000 MB (null) ]
> > Segmentation fault (core dumped)
> >
> > catchsegv got:
> > /home/jolsa/kernel/linux-perf/tools/perf/util/ordered-events.c:85(free_dup_event)[0x51a6a5]
> > ./perf(ordered_events__free+0x5c)[0x51b0b7]
> > /home/jolsa/kernel/linux-perf/tools/perf/util/session.c:1751(__perf_session__process_pipe_events)[0x518abb]
> > ./perf(perf_session__process_events+0x91)[0x5190f0]
> > /home/jolsa/kernel/linux-perf/tools/perf/builtin-report.c:598(__cmd_report)[0x443a91]
> > ./perf(cmd_report+0x169b)[0x4455a3]
> > /home/jolsa/kernel/linux-perf/tools/perf/perf.c:296(run_builtin)[0x4be1b0]
> > /home/jolsa/kernel/linux-perf/tools/perf/perf.c:348(handle_internal_command)[0x4be41d]
> > /home/jolsa/kernel/linux-perf/tools/perf/perf.c:395(run_argv)[0x4be56f]
> > ./perf(main+0x2d6)[0x4be949]
> > /lib64/libc.so.6(__libc_start_main+0xf1)[0x7f3de8a10401]
> > ./perf(_start+0x2a)[0x42831a]
> >
> > Looks like some memory corruption.. I will try to follow up
> > on this later if nobody beats me to it ;-)
>
> I cannot reproduce it on acme's perf/core, building the tool with
> make NO_LIBPYTHON=1 LDFLAGS=-static
>
> If you have a file with the perf record output that causes the perf report
> crash, I'd like to take a look.
>
> Thanks,
> David