Re: [PATCH 1/3] perf, x86: Blacklist all MEM_*_RETIRED events for IVB

From: Stephane Eranian
Date: Thu May 16 2013 - 11:42:27 EST

On Wed, May 15, 2013 at 6:51 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Wed, May 15, 2013 at 04:20:46PM +0200, Stephane Eranian wrote:
>> Hi,
>> Based on our testing, it appears the corruption occurs only
>> when the MEM_* events are used and only on the sibling
>> counter. In other words, if HT0 has MEM_* in cntr0, then
>> HT1 cntr0 cannot be used, otherwise whatever is there may
>> get corrupted.
> Ah, great. That is the least horrid case :-)

>> So I think we could enhance Andi's initial patch
>> to handle this case instead of blacklisting those events. They are
>> very important events.
> Will you take a stab at it? I suppose you'll have to make each counter
> have a shared resource and have the mem_*_retired events mark the
> resource taken.
Yes, I will try to come up with a patch to force multiplexing when those
events are used.

The difficulty is that we cannot reuse the shared_regs infrastructure because
the constraint here is quite different: if a mem event uses cnt0 on one
thread, then nothing may use cnt0 on the sibling thread. shared_regs does
not operate at the counter level, and it cannot easily be extended to, so
we need to invent something else. We would have to schedule the mem event,
look at the resulting assignment, and exclude the counters it uses from
the other thread. Alternatively, we could schedule mem events only on
counters which are not currently in use by the sibling thread. Either way,
we need to peek at the used counters across HT threads, which is not so
easy to do.

> Also, did you test what happens when you turn SMT off in the BIOS so you
> get double the amount of counters; do we then leak into CNTn+4 or is all
> well again?
I have not tried that yet. Hopefully, it won't exhibit the problem.