Re: [PATCH] drivers/perf: riscv_pmu_sbi: add support for PMU variant on T-Head C9xx cores
From: Anup Patel
Date: Thu Aug 18 2022 - 04:24:51 EST
On Thu, Aug 18, 2022 at 1:03 AM Atish Patra <atishp@xxxxxxxxxxxxxx> wrote:
>
> On Wed, Aug 17, 2022 at 4:13 AM Heiko Stuebner <heiko@xxxxxxxxx> wrote:
> >
> > The T-HEAD C9XX cores were designed before or during the ratification of
> > the SSCOFPMF extension, so they implement functionality that is very
> > similar to, but not equal to, it. Add some adaptations to allow the C9XX
> > to still handle its PMU through the regular SBI PMU interface instead of
> > defining new interfaces or drivers.
> >
>
> IMO, vendor-specific workarounds in the generic implementation are not
> a good idea.
> If we have to support it, I think we should just put the IRQ number in
> the DT and parse it from there.
> The initial sscofpmf series was based on the DT. It was removed later
> as there was no need for it at that time.
> We can always revive it.
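>
> Purely as a sketch (the "riscv,pmu" node and the property name below are
> made-up assumptions, not an existing binding), the driver side could be
> something like:
>
>     /* Needs <linux/of.h>; fall back to the standard hwirq when the
>      * hypothetical property is absent. */
>     struct device_node *np = of_find_compatible_node(NULL, NULL, "riscv,pmu");
>     u32 hwirq = RV_IRQ_PMU;
>
>     if (np) {
>             of_property_read_u32(np, "riscv,pmu-overflow-irq", &hwirq);
>             of_node_put(np);
>     }
>     riscv_pmu_irq = irq_create_mapping(domain, hwirq);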
>
> Regarding the CSR number difference and the static key enablement, can we
> use code patching techniques here as well?
> At least that way all the T-HEAD C9XX core errata stay in one place.
>
> The alternative would be to just say that the T-HEAD C9XX supports SSCOFPMF,
> but with errata. I don't prefer this approach, but it keeps the
> vendor-specific checks out of the generic code.
Whether or not to have a DT node was already discussed and concluded in
the past. We don't need a DT node just to get the IRQ number; the T-HEAD
custom IRQ number can be derived from mvendorid.

Also, all these T-HEAD-specific changes in the SBI PMU driver should be
implemented as errata using ALTERNATIVE() macros.
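
For the overflow CSR, a rough sketch of what that could look like
(ERRATA_THEAD_PMU and CONFIG_ERRATA_THEAD_PMU are hypothetical names
here; THEAD_C9XX_CSR_SCOUNTEROF is the define from the patch):

    /*
     * Sketch only: let the errata framework patch the CSR number at
     * boot instead of checking is_thead_c9xx in the hot path.
     * Needs <asm/alternative.h> and <linux/stringify.h>.
     */
    unsigned long overflow;

    asm volatile(ALTERNATIVE("csrr %0, " __stringify(CSR_SSCOUNTOVF),
                             "csrr %0, " __stringify(THEAD_C9XX_CSR_SCOUNTEROF),
                             THEAD_VENDOR_ID, ERRATA_THEAD_PMU,
                             CONFIG_ERRATA_THEAD_PMU)
                 : "=r" (overflow) : : "memory");
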
Regards,
Anup
>
> > To work properly, this requires a matching change in SBI, though the actual
> > interface between the kernel and SBI does not change.
> >
>
> Do you have a working OpenSBI implementation for this?
>
> > The main differences are the overflow CSR and the IRQ number.
> >
> > Signed-off-by: Heiko Stuebner <heiko@xxxxxxxxx>
> > ---
> > drivers/perf/riscv_pmu_sbi.c | 27 +++++++++++++++++++--------
> > 1 file changed, 19 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
> > index 6f6681bbfd36..4589166e0de4 100644
> > --- a/drivers/perf/riscv_pmu_sbi.c
> > +++ b/drivers/perf/riscv_pmu_sbi.c
> > @@ -41,12 +41,17 @@ static const struct attribute_group *riscv_pmu_attr_groups[] = {
> > NULL,
> > };
> >
> > +#define THEAD_C9XX_RV_IRQ_PMU 17
> > +#define THEAD_C9XX_CSR_SCOUNTEROF 0x5c5
> > +
> > /*
> > * RISC-V doesn't have hetergenous harts yet. This need to be part of
> > * per_cpu in case of harts with different pmu counters
> > */
> > static union sbi_pmu_ctr_info *pmu_ctr_list;
> > +static unsigned int riscv_pmu_irq_num;
> > static unsigned int riscv_pmu_irq;
> > +static bool is_thead_c9xx;
> >
> > struct sbi_pmu_event_data {
> > union {
> > @@ -575,7 +580,7 @@ static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
> > fidx = find_first_bit(cpu_hw_evt->used_hw_ctrs, RISCV_MAX_COUNTERS);
> > event = cpu_hw_evt->events[fidx];
> > if (!event) {
> > - csr_clear(CSR_SIP, SIP_LCOFIP);
> > + csr_clear(CSR_SIP, BIT(riscv_pmu_irq_num));
> > return IRQ_NONE;
> > }
> >
> > @@ -583,13 +588,14 @@ static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
> > pmu_sbi_stop_hw_ctrs(pmu);
> >
> > /* Overflow status register should only be read after counter are stopped */
> > - overflow = csr_read(CSR_SSCOUNTOVF);
> > + overflow = !is_thead_c9xx ? csr_read(CSR_SSCOUNTOVF)
> > + : csr_read(THEAD_C9XX_CSR_SCOUNTEROF);
> >
> > /*
> > * Overflow interrupt pending bit should only be cleared after stopping
> > * all the counters to avoid any race condition.
> > */
> > - csr_clear(CSR_SIP, SIP_LCOFIP);
> > + csr_clear(CSR_SIP, BIT(riscv_pmu_irq_num));
> >
> > /* No overflow bit is set */
> > if (!overflow)
> > @@ -653,8 +659,8 @@ static int pmu_sbi_starting_cpu(unsigned int cpu, struct hlist_node *node)
> >
> > if (riscv_isa_extension_available(NULL, SSCOFPMF)) {
> > cpu_hw_evt->irq = riscv_pmu_irq;
> > - csr_clear(CSR_IP, BIT(RV_IRQ_PMU));
> > - csr_set(CSR_IE, BIT(RV_IRQ_PMU));
> > + csr_clear(CSR_IP, BIT(riscv_pmu_irq_num));
> > + csr_set(CSR_IE, BIT(riscv_pmu_irq_num));
> > enable_percpu_irq(riscv_pmu_irq, IRQ_TYPE_NONE);
> > }
> >
> > @@ -665,7 +671,7 @@ static int pmu_sbi_dying_cpu(unsigned int cpu, struct hlist_node *node)
> > {
> > if (riscv_isa_extension_available(NULL, SSCOFPMF)) {
> > disable_percpu_irq(riscv_pmu_irq);
> > - csr_clear(CSR_IE, BIT(RV_IRQ_PMU));
> > + csr_clear(CSR_IE, BIT(riscv_pmu_irq_num));
> > }
> >
> > /* Disable all counters access for user mode now */
> > @@ -681,7 +687,11 @@ static int pmu_sbi_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pde
> > struct device_node *cpu, *child;
> > struct irq_domain *domain = NULL;
> >
> > - if (!riscv_isa_extension_available(NULL, SSCOFPMF))
> > + is_thead_c9xx = (sbi_get_mvendorid() == THEAD_VENDOR_ID &&
> > + sbi_get_marchid() == 0 &&
> > + sbi_get_mimpid() == 0);
> > +
> > + if (!riscv_isa_extension_available(NULL, SSCOFPMF) && !is_thead_c9xx)
> > return -EOPNOTSUPP;
> >
> > for_each_of_cpu_node(cpu) {
> > @@ -703,7 +713,8 @@ static int pmu_sbi_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pde
> > return -ENODEV;
> > }
> >
> > - riscv_pmu_irq = irq_create_mapping(domain, RV_IRQ_PMU);
> > + riscv_pmu_irq_num = !is_thead_c9xx ? RV_IRQ_PMU : THEAD_C9XX_RV_IRQ_PMU;
> > + riscv_pmu_irq = irq_create_mapping(domain, riscv_pmu_irq_num);
> > if (!riscv_pmu_irq) {
> > pr_err("Failed to map PMU interrupt for node\n");
> > return -ENODEV;
> > --
> > 2.35.1
> >
>
>
> --
> Regards,
> Atish
>