Re: [PATCH] perf_events: fix and improve x86 event scheduling

From: Peter Zijlstra
Date: Thu Nov 10 2011 - 09:37:53 EST


Just throwing this out there (hasn't even been compiled etc.).

The idea is to try the fixed counters first so that we don't
'accidentally' fill a GP counter with something that could have lived on
a fixed-purpose one and then end up under-utilizing the PMU that way.

It ought to solve the most common PMU programming fail on Intel
thingies.
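
To illustrate the over-commit this is aimed at, here's a stand-alone toy
model (not the kernel code; NUM_GP, FIXED_BASE, the assign() helper and
the two candidate lists are invented for the example): event A is
something like 'instructions' that can live on a fixed counter or any GP
counter, event B is constrained to GP counter 0 only. Greedy GP-first
assignment burns counter 0 on A and B then fails; trying the fixed range
first schedules both.

/*
 * Toy model of the over-commit, not kernel code.  NUM_GP, FIXED_BASE
 * and the candidate lists below are invented for the example.
 */
#include <stdio.h>

#define NUM_GP		4	/* generic counters: idx 0..3            */
#define FIXED_BASE	32	/* pretend fixed counters start at idx 32 */

/* try candidate counters in order, claim the first free one, -1 if none */
static int assign_counter(const int *cand, int n, unsigned long long *used)
{
	int i;

	for (i = 0; i < n; i++) {
		if (!(*used & (1ULL << cand[i]))) {
			*used |= 1ULL << cand[i];
			return cand[i];
		}
	}
	return -1;
}

int main(void)
{
	/* A: fixed-capable event, B: only works on GP counter 0 */
	int a_gp_first[]    = { 0, 1, 2, 3, FIXED_BASE };
	int a_fixed_first[] = { FIXED_BASE, 0, 1, 2, 3 };
	int b_gp0_only[]    = { 0 };
	unsigned long long used;
	int a, b;

	used = 0;
	a = assign_counter(a_gp_first, 5, &used);
	b = assign_counter(b_gp0_only, 1, &used);
	printf("GP first:    A=%d B=%d\n", a, b);	/* A=0  B=-1 -> fail */

	used = 0;
	a = assign_counter(a_fixed_first, 5, &used);
	b = assign_counter(b_gp0_only, 1, &used);
	printf("fixed first: A=%d B=%d\n", a, b);	/* A=32 B=0  -> ok   */

	return 0;
}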

---
Index: linux-2.6/arch/x86/kernel/cpu/perf_event.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/cpu/perf_event.c
+++ linux-2.6/arch/x86/kernel/cpu/perf_event.c
@@ -558,14 +558,22 @@ int x86_schedule_events(struct cpu_hw_ev
 			if (c->weight != w)
 				continue;
 
-			for_each_set_bit(j, c->idxmsk, X86_PMC_IDX_MAX) {
+			if (x86_pmu.num_counters_fixed) {
+				j = X86_PMC_IDX_FIXED - 1;
+				for_each_set_bit_cont(j, c->idxmsk, X86_PMC_IDX_MAX) {
+					if (!test_bit(j, used_mask))
+						goto assign;
+				}
+			}
+
+			for_each_set_bit(j, c->idxmsk, X86_PMC_IDX_FIXED) {
 				if (!test_bit(j, used_mask))
-					break;
+					goto assign;
 			}
 
-			if (j == X86_PMC_IDX_MAX)
-				break;
+			break;
 
+assign:
 			__set_bit(j, used_mask);
 
 			if (assign)
Index: linux-2.6/include/linux/bitops.h
===================================================================
--- linux-2.6.orig/include/linux/bitops.h
+++ linux-2.6/include/linux/bitops.h
@@ -26,6 +26,12 @@ extern unsigned long __sw_hweight64(__u6
 	     (bit) < (size); \
 	     (bit) = find_next_bit((addr), (size), (bit) + 1))
 
+#define for_each_set_bit_cont(bit, addr, size) \
+	for ((bit) = find_next_bit((addr), (size), (bit) + 1); \
+	     (bit) < (size); \
+	     (bit) = find_next_bit((addr), (size), (bit) + 1))
+
+
 static __inline__ int get_bitmask_order(unsigned int count)
 {
 	int order;
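
As for the new bitops helper: unlike for_each_set_bit(), the _cont
variant doesn't restart at bit 0, it continues scanning after whatever
'bit' currently holds, which is why the scheduling loop above seeds j
with X86_PMC_IDX_FIXED - 1. A throwaway user-space check of that
behaviour is below; the find_next_bit() stub and the mask value are
made up just so it builds on its own, it's not the kernel
implementation.

#include <stdio.h>

/* single-word stand-in for the kernel's find_next_bit(), toy only */
static unsigned long find_next_bit(const unsigned long *addr,
				   unsigned long size, unsigned long start)
{
	unsigned long i;

	for (i = start; i < size; i++)
		if (addr[0] & (1UL << i))
			return i;
	return size;
}

#define for_each_set_bit_cont(bit, addr, size) \
	for ((bit) = find_next_bit((addr), (size), (bit) + 1); \
	     (bit) < (size); \
	     (bit) = find_next_bit((addr), (size), (bit) + 1))

int main(void)
{
	unsigned long mask = 0xf5;	/* bits 0,2,4,5,6,7 set  */
	unsigned long bit = 3;		/* continue after bit 3  */

	for_each_set_bit_cont(bit, &mask, 8)
		printf("%lu ", bit);	/* prints: 4 5 6 7 */
	printf("\n");

	return 0;
}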
