Re: [Ptools-perfapi] [perfmon2] [PATCH] perf_events: AMD event scheduling (v1)

From: stephane eranian
Date: Fri Jan 22 2010 - 10:25:16 EST


On Fri, Jan 22, 2010 at 4:22 PM, Dan Terpstra <terpstra@xxxxxxxxxxxx> wrote:
> Excellent!
> Now I'd love to see equivalent functionality on Nehalem!

You mean for uncore PMU, right?
The idea is that the same approach can be used; we just need to
agree on the encoding of the events.
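
For reference, the AMD NB events handled by this patch are reached
through the existing raw event encoding, so no new user ABI is needed.
A minimal userspace sketch (event code 0xE0, DRAM Accesses per the
BKDG; the 0x07 unit mask is only an illustration, not taken from the
patch):

    #include <linux/perf_event.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* count one NB event on a given CPU, for all tasks */
    static int open_nb_event(int cpu)
    {
            struct perf_event_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.size   = sizeof(attr);
            attr.type   = PERF_TYPE_RAW;
            /* bits [7:0] = event code 0xe0, bits [15:8] = unit mask */
            attr.config = (0x07ULL << 8) | 0xe0;

            /* pid = -1, cpu >= 0: system-wide on that CPU */
            return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
    }

Whatever encoding is agreed on for the uncore events would plug into
the same attr.config path.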

> - dan
>
>> -----Original Message-----
>> From: Stephane Eranian [mailto:eranian@xxxxxxxxxx]
>> Sent: Friday, January 22, 2010 5:43 AM
>> To: linux-kernel@xxxxxxxxxxxxxxx
>> Cc: perfmon2-devel@xxxxxxxxxxxx; eranian@xxxxxxxxx; peterz@xxxxxxxxxxxxx;
>> fweisbec@xxxxxxxxx; eranian@xxxxxxxxxx; paulus@xxxxxxxxx; mingo@xxxxxxx;
>> davem@xxxxxxxxxxxxx
>> Subject: [perfmon2] [PATCH] perf_events: AMD event scheduling (v1)
>>
>>
>>       This patch adds correct AMD Northbridge event scheduling.
>>       It must be applied on top of my v5 + v6 incremental event
>>       scheduling patch.
>>
>>       AMD Northbridge (NB) events measure L3 and Hypertransport
>>       activities. There is a documented restriction on how NB
>>       events can be programmed (refer to BKDG section 3.12).
>>
>>       No two cores can use the same counter to measure NB events.
>>       This patch implements this restriction by maintaining a
>>       per-Northbridge counter allocation table. All cores attached
>>       to the same NB compete to allocate NB events. Given that there
>>       are 4 counters, at most 4 NB events can be scheduled at a time
>>       across all cores of the NB; if every core measures NB events,
>>       each core gets at most 1. The better alternative is to measure
>>       all NB events from a single core. Both approaches are possible
>>       with this patch. If there are more NB events than counters,
>>       some NB events will not be scheduled, e.g., 2 NB events on
>>       each core of a 4-core package.
>>
>>       The patch also takes care of CPU hotplug.
>>
>>       Signed-off-by: Stephane Eranian <eranian@xxxxxxxxxx>
>>
>> --
>>  arch/x86/kernel/cpu/perf_event.c |  252 ++++++++++++++++++++++++++++++++++++++-
>>  kernel/perf_event.c              |    5
>>  2 files changed, 254 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/x86/kernel/cpu/perf_event.c
>> b/arch/x86/kernel/cpu/perf_event.c
>> index a961b1f..a97a744 100644
>> --- a/arch/x86/kernel/cpu/perf_event.c
>> +++ b/arch/x86/kernel/cpu/perf_event.c
>> @@ -69,6 +69,12 @@ struct debug_store {
>>      u64     pebs_event_reset[MAX_PEBS_EVENTS];
>>  };
>>
>> +struct amd_nb {
>> +    int nb_id;  /* Northbridge id */
>> +    int refcnt; /* reference count */
>> +    struct perf_event *owners[X86_PMC_IDX_MAX];
>> +};
>> +
>>  #define BITS_TO_U64(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(u64))
>>
>>  struct event_constraint {
>> @@ -89,6 +95,7 @@ struct cpu_hw_events {
>>      int                     assign[X86_PMC_IDX_MAX]; /* event to counter assignment */
>>      u64                     tags[X86_PMC_IDX_MAX];
>>      struct perf_event       *event_list[X86_PMC_IDX_MAX]; /* in enabled order */
>> +    struct amd_nb           *amd_nb;
>>  };
>>
>>  #define EVENT_CONSTRAINT(c, n, m) { \
>> @@ -134,6 +141,8 @@ struct x86_pmu {
>>
>>  static struct x86_pmu x86_pmu __read_mostly;
>>
>> +static raw_spinlock_t amd_nb_lock;
>>
>>  static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events) = {
>>      .enabled = 1,
>>  };
>> @@ -2199,12 +2208,144 @@ static void intel_get_event_constraints(struct cpu_hw_events *cpuc,
>>      bitmap_fill((unsigned long *)idxmsk, x86_pmu.num_events);
>>  }
>>
>> +/*
>> + * AMD64 events are detected based on their event codes.
>> + */
>> +static inline int amd_is_nb_event(struct hw_perf_event *hwc)
>> +{
>> +    u64 val = hwc->config;
>> +    /* event code: bits [35-32] | [7-0] */
>> +    val = (val >> 24) | (val & 0xff);
>> +    return val >= 0x0e0;
>> +}
>> +
>> +static void amd_put_event_constraints(struct cpu_hw_events *cpuc,
>> +                                     struct perf_event *event)
>> +{
>> +    struct hw_perf_event *hwc = &event->hw;
>> +    struct perf_event *old;
>> +    struct amd_nb *nb;
>> +    int i;
>> +
>> +    /*
>> +     * only care about NB events
>> +     */
>> +    if (!amd_is_nb_event(hwc))
>> +            return;
>> +
>> +    /*
>> +     * NB not initialized
>> +     */
>> +    nb = cpuc->amd_nb;
>> +    if (!nb)
>> +            return;
>> +
>> +    if (hwc->idx == -1)
>> +            return;
>> +
>> +    /*
>> +     * need to scan whole list because event may not have
>> +     * been assigned during scheduling
>> +     */
>> +    for (i = 0; i < x86_pmu.num_events; i++) {
>> +            if (nb->owners[i] == event) {
>> +                    old = cmpxchg(nb->owners + i, event, NULL);
>> +                    WARN_ON(old != event);
>> +                    return;
>> +            }
>> +    }
>> +}
>> +
>> +/*
>> + * AMD64 Northbridge events need special treatment because
>> + * counter access needs to be synchronized across all cores
>> + * of a package. Refer to BKDG section 3.12
>> + *
>> + * NB events are events measuring L3 cache, Hypertransport
>> + * traffic. They are identified by an event code >= 0xe0.
>> + *
>> + * No two cores can be measuring NB events using the same
>> + * counter. In other words, for NB events, it is as if there
>> + * was only one set of counters per package (or cores sharing
>> + * the same NB). Thus, we need to maintain a per-NB allocation
>> + * table. The available slot is propagated using the bitmask.
>> + * We provide only one choice for each NB event based on
>> + * the fact that only NB events have restrictions. Consequently,
>> + * if a counter is available, there is a guarantee the NB event
>> + * will be assigned to it. If no slot is available, an empty
>> + * bitmask is returned and scheduling fails.
>> + *
>> + * Note that all cores attached to the same NB compete for the same
>> + * counters to host NB events; this is why we use atomic ops.
>> + *
>> + * Given that resources are allocated (cmpxchg), they must be
>> + * eventually freed for others to use. This is accomplished by
>> + * calling amd_put_event_constraints().
>> + *
>> + * Non-NB events are not impacted by this restriction.
>> + */
>>  static void amd_get_event_constraints(struct cpu_hw_events *cpuc,
>>                                        struct perf_event *event,
>>                                        u64 *idxmsk)
>>  {
>> -    /* no constraints, means supports all generic counters */
>> -    bitmap_fill((unsigned long *)idxmsk, x86_pmu.num_events);
>> +    struct hw_perf_event *hwc = &event->hw;
>> +    struct amd_nb *nb = cpuc->amd_nb;
>> +    struct perf_event *old = NULL;
>> +    int max = x86_pmu.num_events;
>> +    int i, j, k = -1;
>> +
>> +    /*
>> +     * clean up vector
>> +     */
>> +    bitmap_zero((unsigned long *)idxmsk, X86_PMC_IDX_MAX);
>> +
>> +    /*
>> +     * if not NB event or no NB, then no constraints
>> +     */
>> +    if (!amd_is_nb_event(hwc) || !nb) {
>> +            bitmap_fill((unsigned long *)idxmsk, x86_pmu.num_events);
>> +            return;
>> +    }
>> +    /*
>> +     * detect if already present, if so reuse
>> +     *
>> +     * cannot merge with actual allocation
>> +     * because of possible holes
>> +     *
>> +     * event can already be present yet not assigned (in hwc->idx)
>> +     * because of successive calls to x86_schedule_events() from
>> +     * hw_perf_group_sched_in() without hw_perf_enable()
>> +     */
>> +    for (i = 0; i < max; i++) {
>> +            /*
>> +             * keep track of first free slot
>> +             */
>> +            if (k == -1 && !nb->owners[i])
>> +                    k = i;
>> +
>> +            /* already present, reuse */
>> +            if (nb->owners[i] == event)
>> +                    goto skip;
>> +    }
>> +    /*
>> +     * not present, so grab a new slot
>> +     *
>> +     * try to allocate the same counter as before if
>> +     * the event has already been assigned once. Otherwise,
>> +     * try to use the free counter k obtained during the 1st
>> +     * pass above.
>> +     */
>> +    i = j = hwc->idx != -1 ? hwc->idx : (k == -1 ? 0 : k);
>> +    do {
>> +            old = cmpxchg(nb->owners + i, NULL, event);
>> +            if (!old)
>> +                    break;
>> +            if (++i == x86_pmu.num_events)
>> +                    i = 0;
>> +    } while (i != j);
>> +skip:
>> +    if (!old)
>> +            set_bit(i, (unsigned long *)idxmsk);
>>  }
>>
>>  static int x86_event_sched_in(struct perf_event *event,
>> @@ -2394,7 +2535,8 @@ static __initconst struct x86_pmu amd_pmu = {
>>      .apic                   = 1,
>>      /* use highest bit to detect overflow */
>>      .max_period             = (1ULL << 47) - 1,
>> -    .get_event_constraints  = amd_get_event_constraints
>> +    .get_event_constraints  = amd_get_event_constraints,
>> +    .put_event_constraints  = amd_put_event_constraints
>>  };
>>
>>  static __init int p6_pmu_init(void)
>> @@ -2501,6 +2643,87 @@ static __init int intel_pmu_init(void)
>>      return 0;
>>  }
>>
>> +static struct amd_nb *amd_alloc_nb(int cpu, int nb_id)
>> +{
>> +    struct amd_nb *nb;
>> +
>> +    nb = vmalloc_node(sizeof(struct amd_nb), cpu_to_node(cpu));
>> +    if (!nb)
>> +            return NULL;
>> +
>> +    memset(nb, 0, sizeof(*nb));
>> +    nb->nb_id = nb_id;
>> +    return nb;
>> +}
>> +
>> +static void amd_pmu_cpu_online(int cpu)
>> +{
>> +    struct cpu_hw_events *cpu1, *cpu2;
>> +    struct amd_nb *nb = NULL;
>> +    int i, nb_id;
>> +
>> +    if (boot_cpu_data.x86_max_cores < 2)
>> +            return;
>> +
>> +    /*
>> +     * function may be called too early in the
>> +     * boot process, in which case nb_id is bogus
>> +     *
>> +     * for BSP, there is an explicit call from
>> +     * amd_pmu_init()
>> +     */
>> +    nb_id = amd_get_nb_id(cpu);
>> +    if (nb_id == BAD_APICID)
>> +            return;
>> +
>> +    cpu1 = &per_cpu(cpu_hw_events, cpu);
>> +    cpu1->amd_nb = NULL;
>> +
>> +    raw_spin_lock(&amd_nb_lock);
>> +
>> +    for_each_online_cpu(i) {
>> +            cpu2 = &per_cpu(cpu_hw_events, i);
>> +            nb = cpu2->amd_nb;
>> +            if (!nb)
>> +                    continue;
>> +            if (nb->nb_id == nb_id)
>> +                    goto found;
>> +    }
>> +
>> +    nb = amd_alloc_nb(cpu, nb_id);
>> +    if (!nb) {
>> +            pr_err("perf_events: failed to allocate NB storage for CPU%d\n", cpu);
>> +            raw_spin_unlock(&amd_nb_lock);
>> +            return;
>> +    }
>> +found:
>> +    nb->refcnt++;
>> +    cpu1->amd_nb = nb;
>> +
>> +    raw_spin_unlock(&amd_nb_lock);
>> +
>> +    pr_info("CPU%d NB%d ref=%d\n", cpu, nb_id, nb->refcnt);
>> +}
>> +
>> +static void amd_pmu_cpu_offline(int cpu)
>> +{
>> +    struct cpu_hw_events *cpuhw;
>> +
>> +    if (boot_cpu_data.x86_max_cores < 2)
>> +            return;
>> +
>> +    cpuhw = &per_cpu(cpu_hw_events, cpu);
>> +
>> +    raw_spin_lock(&amd_nb_lock);
>> +
>> +    if (--cpuhw->amd_nb->refcnt == 0)
>> +            vfree(cpuhw->amd_nb);
>> +
>> +    cpuhw->amd_nb = NULL;
>> +
>> +    raw_spin_unlock(&amd_nb_lock);
>> +}
>> +
>>  static __init int amd_pmu_init(void)
>>  {
>>      /* Performance-monitoring supported from K7 and later: */
>> @@ -2513,6 +2736,8 @@ static __init int amd_pmu_init(void)
>>      memcpy(hw_cache_event_ids, amd_hw_cache_event_ids,
>>             sizeof(hw_cache_event_ids));
>>
>> +    /* initialize BSP */
>> +    amd_pmu_cpu_online(smp_processor_id());
>>      return 0;
>>  }
>>
>> @@ -2842,4 +3067,25 @@ struct perf_callchain_entry *perf_callchain(struct pt_regs *regs)
>>  void hw_perf_event_setup_online(int cpu)
>>  {
>>      init_debug_store_on_cpu(cpu);
>> +
>> +    switch (boot_cpu_data.x86_vendor) {
>> +    case X86_VENDOR_AMD:
>> +            amd_pmu_cpu_online(cpu);
>> +            break;
>> +    default:
>> +            return;
>> +    }
>> +}
>> +
>> +void hw_perf_event_setup_offline(int cpu)
>> +{
>> +    init_debug_store_on_cpu(cpu);
>> +
>> +    switch (boot_cpu_data.x86_vendor) {
>> +    case X86_VENDOR_AMD:
>> +            amd_pmu_cpu_offline(cpu);
>> +            break;
>> +    default:
>> +            return;
>> +    }
>>  }
>> diff --git a/kernel/perf_event.c b/kernel/perf_event.c
>> index 27f69a0..20f212e 100644
>> --- a/kernel/perf_event.c
>> +++ b/kernel/perf_event.c
>> @@ -98,6 +98,7 @@ void __weak hw_perf_enable(void)            { barrier(); }
>>
>>  void __weak hw_perf_event_setup(int cpu)            { barrier(); }
>>  void __weak hw_perf_event_setup_online(int cpu)     { barrier(); }
>> +void __weak hw_perf_event_setup_offline(int cpu){ barrier(); }
>>
>>  int __weak
>>  hw_perf_group_sched_in(struct perf_event *group_leader,
>> @@ -5251,6 +5252,10 @@ perf_cpu_notify(struct notifier_block *self, unsigned long action, void *hcpu)
>>              perf_event_exit_cpu(cpu);
>>              break;
>>
>> +    case CPU_DEAD:
>> +            hw_perf_event_setup_offline(cpu);
>> +            break;
>> +
>>      default:
>>              break;
>>      }
>>