Re: [PATCH RESEND v2 8/8] KVM: x86/svm/pmu: Rewrite get_gp_pmc_amd() for more counters scalability

From: Sean Christopherson
Date: Tue Aug 30 2022 - 14:24:45 EST


On Tue, Aug 23, 2022, Like Xu wrote:
> static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
> 					     enum pmu_type type)
> {
> 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
> +	unsigned int idx;
>
> 	if (!vcpu->kvm->arch.enable_pmu)
> 		return NULL;
>
> 	switch (msr) {
> -	case MSR_F15H_PERF_CTL0:
> -	case MSR_F15H_PERF_CTL1:
> -	case MSR_F15H_PERF_CTL2:
> -	case MSR_F15H_PERF_CTL3:
> -	case MSR_F15H_PERF_CTL4:
> -	case MSR_F15H_PERF_CTL5:
> +	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
> 		if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
> 			return NULL;
> -		fallthrough;

> +		idx = (unsigned int)((msr - MSR_F15H_PERF_CTL0) / 2);

> +		if ((msr == (MSR_F15H_PERF_CTL0 + 2 * idx)) !=
> +		    (type == PMU_TYPE_EVNTSEL))

This is more complicated than it needs to be.  CTLn MSRs are even, CTRn MSRs
are odd (I think I got the logic right, but the below is untested).

And this all needs a comment.


		/*
		 * Each PMU counter has a pair of CTL and CTR MSRs.  CTLn
		 * MSRs (accessed via EVNTSEL) are even, CTRn MSRs are odd.
		 */
		idx = (unsigned int)((msr - MSR_F15H_PERF_CTL0) / 2);
		if (!(msr & 0x1) != (type == PMU_TYPE_EVNTSEL))
			return NULL;

> +			return NULL;
> +		break;