Re: [PATCH v10] perf: Sharing PMU counters across compatible events

From: Song Liu
Date: Wed Mar 04 2020 - 16:58:19 EST




> On Mar 4, 2020, at 8:48 AM, Song Liu <songliubraving@xxxxxx> wrote:
>
> Hi Peter,
>
>> On Feb 28, 2020, at 1:46 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>>
>> On Fri, Feb 28, 2020 at 10:36:04AM +0100, Peter Zijlstra wrote:
>>> +
>>> + /*
>>> + * Flip an active event to a new master; this is tricky because
>>> + * for an active event, event_pmu_read() can be called at any
>>> + * time from NMI context.
>>> + *
>>> + * This means we need to have ->dup_master and
>>> + * ->dup_count consistent at all times. Of course we cannot do
>>> + * two writes at once :/
>>> + *
>>> + * Instead, flip ->dup_master to EVENT_TOMBSTONE, this will
>>> + * make event_pmu_read_dup() NOP. Then we can set
>>> + * ->dup_count and finally set ->dup_master to the new_master
>>> + * to let event_pmu_read_dup() rip.
>>> + */
>>> + WRITE_ONCE(tmp->dup_master, EVENT_TOMBSTONE);
>>> + barrier();
>>> +
>>> + count = local64_read(&new_master->count);
>>> + local64_set(&tmp->dup_count, count);
>>> +
>>> + if (tmp == new_master)
>>> + local64_set(&tmp->master_count, count);
>>> +
>>> + barrier();
>>> + WRITE_ONCE(tmp->dup_master, new_master);
>>> dup_count++;
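
Side note, mostly to check that I read the flip above correctly: it only
works because the NMI-context reader re-checks ->dup_master on every read
and treats the tombstone as "do nothing". Roughly along these lines; this
is just my paraphrase of the idea, not the actual event_pmu_read_dup()
from the patch, so the helper name and the exact delta bookkeeping are
made up:

        /* sketch only; assumes ->dup_master/->dup_count as in v10 */
        static void event_pmu_read_dup_sketch(struct perf_event *event)
        {
                struct perf_event *master = READ_ONCE(event->dup_master);
                u64 new_count;

                /* mid-flip (EVENT_TOMBSTONE) or torn down (NULL): NOP */
                if (!master || master == EVENT_TOMBSTONE)
                        return;

                /* fold the master's progress since our last read into ->count */
                new_count = local64_read(&master->count);
                local64_add(new_count - local64_read(&event->dup_count),
                            &event->count);
                local64_set(&event->dup_count, new_count);
        }

If that matches your intent, the flip sequence above makes sense to me.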
>>
>>> @@ -4453,12 +4484,14 @@ static void __perf_event_read(void *info
>>>
>>> static inline u64 perf_event_count(struct perf_event *event)
>>> {
>>> - if (event->dup_master == event) {
>>> - return local64_read(&event->master_count) +
>>> - atomic64_read(&event->master_child_count);
>>> - }
>>> + u64 count;
>>>
>>> - return local64_read(&event->count) + atomic64_read(&event->child_count);
>>> + if (likely(event->dup_master != event))
>>> + count = local64_read(&event->count);
>>> + else
>>> + count = local64_read(&event->master_count);
>>> +
>>> + return count + atomic64_read(&event->child_count);
>>> }
>>>
>>> /*
>>
>> One thing that I've failed to mention so far (but it has sorta been implied
>> if you thought carefully) is that ->dup_master and ->master_count also
>> need to be consistent at all times. Even !ACTIVE events can have
>> perf_event_count() called on them.
>>
>> Worse; I just realized that perf_event_count() is called remotely, so we
>> need SMP ordering between reading ->dup_master and ->master_count
>> *groan*....
>
> Thanks for all these fixes! I ran some tests with these changes. They work
> well in general, with a few minor things to improve:
>
> 1. The current perf_event_compatible() doesn't realize the full potential of
> sharing. Many bits in perf_event_attr don't really matter for sharing, e.g.,
> disabled, inherit, etc. I guess we can take a closer look at this after
> fixing the core logic.
>
> 2. There is something wrong with cgroup events, in that the first reading
> of perf-stat is sometimes not accurate. But this also happens without
> PMU sharing, so I will debug it separately.
>
> 3. I guess we still need to handle SMP ordering in perf_event_count(). I
> haven't looked into it yet.
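
On 1. above, what I have in mind is roughly the following: compare copies
of the two perf_event_attr with the "don't care" bits cleared first. This
is only an illustration, not the current perf_event_compatible() from the
patch, and the exact list of bits to ignore still needs a careful pass:

        /* hypothetical sketch; real code needs the full list of don't-care bits */
        static bool perf_event_attr_compatible_sketch(struct perf_event *a,
                                                      struct perf_event *b)
        {
                struct perf_event_attr x = a->attr;
                struct perf_event_attr y = b->attr;

                /* bits that only affect bookkeeping, not what the PMU counts */
                x.disabled = y.disabled = 0;
                x.inherit  = y.inherit  = 0;
                /* ... */

                return !memcmp(&x, &y, sizeof(x));
        }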

I guess the following is sufficient for the SMP ordering with
perf_event_count()?

diff --git a/kernel/events/core.c b/kernel/events/core.c
index b91956276fee..83a263c85f42 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1715,14 +1713,13 @@ static void perf_event_exit_dup_master(struct perf_event *event)
WARN_ON_ONCE(event->state < PERF_EVENT_STATE_OFF ||
event->state > PERF_EVENT_STATE_INACTIVE);

+ /* restore event->count and event->child_count */
+ local64_set(&event->count, local64_read(&event->master_count));
+
event->dup_active = 0;
WRITE_ONCE(event->dup_master, NULL);

barrier();
-
- /* restore event->count and event->child_count */
- local64_set(&event->count, local64_read(&event->master_count));
}
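
And on the reader side, I suppose perf_event_count() would then need the
matching ordering, something like the sketch below (untested, written on
top of the version you posted; whether a plain barrier() on the writer
side is enough, or whether it has to become smp_wmb(), is exactly the
part I am not sure about):

        static inline u64 perf_event_count(struct perf_event *event)
        {
                struct perf_event *dup_master = READ_ONCE(event->dup_master);
                u64 count;

                /*
                 * Pair with the write side: ->count (or ->master_count) is
                 * written before ->dup_master is flipped, so read it after
                 * ->dup_master here.
                 */
                smp_rmb();

                if (likely(dup_master != event))
                        count = local64_read(&event->count);
                else
                        count = local64_read(&event->master_count);

                return count + atomic64_read(&event->child_count);
        }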

Thanks,
Song