Re: perf overlapping maps...

From: Don Zickus
Date: Mon Oct 22 2018 - 10:07:42 EST


(adding Jiri)

On Fri, Oct 19, 2018 at 09:44:01PM -0700, David Miller wrote:
> From: David Miller <davem@xxxxxxxxxxxxx>
> Date: Fri, 19 Oct 2018 21:05:49 -0700 (PDT)
>
> > One solution I've come up with is:
> >
> > 1) When synthesizing a fork event, set PERF_RECORD_MISC_COMM_EXEC in
> > header->misc.
> >
> > 2) Use this to elide the map groups clone in
> > thread__clone_map_groups().
>
> Looking into code history, I notice:
>
> commit 363b785f3805a2632eb09a8b430842461c21a640
> Author: Don Zickus <dzickus@xxxxxxxxxx>
> Date: Fri Mar 14 10:43:44 2014 -0400
>
> perf tools: Speed up thread map generation
>
> and the subsequent:
>
> commit 4aa5f4f7bb8bc41cba15bcd0d80c4fb085027d6b
> Author: Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>
> Date: Fri Feb 27 19:52:10 2015 -0300
>
> perf tools: Fix FORK after COMM when synthesizing records for pre-existing threads
>
> If Don wanted the map cloning to happen for processes created without
> CLONE_VM, I'm not sure that's right.
>
> For real threads, we just take a reference to the map group from
> the parent.
>
> Don, a quick summary: if we synthesize a fork event, let's say for an
> emacs process, perf will clone the map groups of the parent bash
> shell which invoked emacs, via:
>
> thread__fork(thread, parent, timestamp)
> {
> 	...
> 	thread__clone_map_groups(thread, parent)
> 	{
> 		...
> 		map_groups__clone(thread, parent->mg)
>
> Which is completely bogus. It brings all of the bash process maps
> into the emacs thread map group. Then we process the emacs mmap2
> events, which overlap the bash process maps already cloned into the
> emacs map group. And this makes all kinds of erroneous things happen.
>
> I'm suggesting to elide the map groups clone in this situation where
> we are synthesizing the fork.
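>
> For (2), the consumer side would then check that flag and skip the clone,
> e.g. (sketch only; the do_maps_clone plumbing from
> machine__process_fork_event() down to thread__fork() is hypothetical):
>
> 	/* machine__process_fork_event() */
> 	bool do_maps_clone = true;
>
> 	/* A synthesized FORK just creates a thread object for an
> 	 * already-running task; its real maps arrive via the synthesized
> 	 * MMAP2 events, so cloning the parent's maps only creates overlaps.
> 	 */
> 	if (event->fork.header.misc & PERF_RECORD_MISC_COMM_EXEC)
> 		do_maps_clone = false;
>
> 	...
> 	thread__fork(thread, parent, sample->time, do_maps_clone);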

Hi David,

Honestly, I remember very little about this change other than that we ran
specjbb, which created thousands of threads, and we wanted a better way to
handle that situation (waiting 15 minutes seemed wrong).

Jiri Olsa is probably more knowledgeable about this than I am these days and
can work with Joe to re-run the test to verify any suggested changes do not
break our intended use case.

Thinking about it more, I am wondering if we did this because we ran the
test, which takes about 20 minutes to 'warm up', and only then attached perf
to it. That implies we had to handle the situation where 10K threads already
existed, hence our optimization. But I could be wrong.

Your suggestion is probably right, and I am sure we can reproduce the
scenario to verify nothing regresses.

Cheers,
Don