Re: [PATCH V2 04/23] perf/x86/intel: Support adaptive PEBSv4

From: Peter Zijlstra
Date: Thu Mar 21 2019 - 17:17:18 EST


On Thu, Mar 21, 2019 at 01:56:44PM -0700, kan.liang@xxxxxxxxxxxxxxx wrote:
> +static inline void *next_pebs_record(void *p)
> +{
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +	unsigned int size;
> +
> +	if (x86_pmu.intel_cap.pebs_format < 4)
> +		size = x86_pmu.pebs_record_size;
> +	else
> +		size = cpuc->pebs_record_size;
> +	return p + size;
> +}

> @@ -1323,19 +1580,19 @@ get_next_pebs_record_by_bit(void *base, void *top, int bit)
> 	if (base == NULL)
> 		return NULL;
>
> -	for (at = base; at < top; at += x86_pmu.pebs_record_size) {
> -		struct pebs_record_nhm *p = at;
> +	for (at = base; at < top; at = next_pebs_record(at)) {
> +		unsigned long status = get_pebs_status(at);

afaict we do not mix base and adaptive records, and thus the above
really could use cpuc->pebs_record_size unconditionally, right?
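
Something like the below is a rough sketch of what I have in mind; it
assumes cpuc->pebs_record_size also gets initialized for the
pre-adaptive formats (which I don't think this patch does yet):

static inline void *next_pebs_record(void *p)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

	/*
	 * Assuming cpuc->pebs_record_size is kept valid for every PEBS
	 * format, the format check goes away and the record stride is a
	 * single per-cpu load.
	 */
	return p + cpuc->pebs_record_size;
}

That keeps the loop in get_next_pebs_record_by_bit() a simple fixed
stride either way.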