Re: [PATCH v7 2/2] PCI: hv: Add arm64 Hyper-V vPCI support

From: Marc Zyngier
Date: Tue Dec 28 2021 - 07:23:22 EST


On Mon, 27 Dec 2021 17:38:07 +0000,
"Michael Kelley (LINUX)" <mikelley@xxxxxxxxxxxxx> wrote:
>
> From: Sunil Muthuswamy <sunilmut@xxxxxxxxxxxxxxxxxxx> Sent: Friday, December 17, 2021 10:52 AM
> >
> > Add arm64 Hyper-V vPCI support by implementing the arch specific
> > interfaces. Introduce an IRQ domain and chip specific to Hyper-V vPCI that
> > is based on SPIs. The IRQ domain parents itself to the arch GIC IRQ domain
> > for basic vector management.
> >
> > Signed-off-by: Sunil Muthuswamy <sunilmut@xxxxxxxxxxxxx>
> > ---
> > In v2, v3, v4, v5, v6 & v7:
> > Changes are described in the cover letter.
> >
> > arch/arm64/include/asm/hyperv-tlfs.h | 9 +
> > drivers/pci/Kconfig | 2 +-
> > drivers/pci/controller/Kconfig | 2 +-
> > drivers/pci/controller/pci-hyperv.c | 241 ++++++++++++++++++++++++++-
> > 4 files changed, 251 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/hyperv-tlfs.h b/arch/arm64/include/asm/hyperv-tlfs.h
> > index 4d964a7f02ee..bc6c7ac934a1 100644
> > --- a/arch/arm64/include/asm/hyperv-tlfs.h
> > +++ b/arch/arm64/include/asm/hyperv-tlfs.h
> > @@ -64,6 +64,15 @@
> > #define HV_REGISTER_STIMER0_CONFIG 0x000B0000
> > #define HV_REGISTER_STIMER0_COUNT 0x000B0001
> >
> > +union hv_msi_entry {
> > + u64 as_uint64[2];
> > + struct {
> > + u64 address;
> > + u32 data;
> > + u32 reserved;
> > + } __packed;
> > +};
> > +
> > #include <asm-generic/hyperv-tlfs.h>
> >
> > #endif
> > diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
> > index 43e615aa12ff..d98fafdd0f99 100644
> > --- a/drivers/pci/Kconfig
> > +++ b/drivers/pci/Kconfig
> > @@ -184,7 +184,7 @@ config PCI_LABEL
> >
> > config PCI_HYPERV
> > tristate "Hyper-V PCI Frontend"
> > - depends on X86_64 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
> > + depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
> > select PCI_HYPERV_INTERFACE
> > help
> > The PCI device frontend driver allows the kernel to import arbitrary
> > diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
> > index 93b141110537..2536abcc045a 100644
> > --- a/drivers/pci/controller/Kconfig
> > +++ b/drivers/pci/controller/Kconfig
> > @@ -281,7 +281,7 @@ config PCIE_BRCMSTB
> >
> > config PCI_HYPERV_INTERFACE
> > tristate "Hyper-V PCI Interface"
> > - depends on X86 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && X86_64
> > + depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN
> > help
> > The Hyper-V PCI Interface is a helper driver allows other drivers to
> > have a common interface with the Hyper-V PCI frontend driver.
> > diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> > index ead7d6cb6bf1..02ba2e7e2618 100644
> > --- a/drivers/pci/controller/pci-hyperv.c
> > +++ b/drivers/pci/controller/pci-hyperv.c
> > @@ -47,6 +47,8 @@
> > #include <linux/msi.h>
> > #include <linux/hyperv.h>
> > #include <linux/refcount.h>
> > +#include <linux/irqdomain.h>
> > +#include <linux/acpi.h>
> > #include <asm/mshyperv.h>
> >
> > /*
> > @@ -614,7 +616,236 @@ static int hv_msi_prepare(struct irq_domain *domain, struct device *dev,
> > {
> > return pci_msi_prepare(domain, dev, nvec, info);
> > }
> > -#endif /* CONFIG_X86 */
> > +#elif defined(CONFIG_ARM64)
> > +/*
> > + * SPI vectors to use for vPCI; arch SPIs range is [32, 1019], but leaving a bit
> > + * of room at the start to allow for SPIs to be specified through ACPI and
> > + * starting with a power of two to satisfy power of 2 multi-MSI requirement.
> > + */
> > +#define HV_PCI_MSI_SPI_START 64
> > +#define HV_PCI_MSI_SPI_NR (1020 - HV_PCI_MSI_SPI_START)
> > +#define DELIVERY_MODE 0
> > +#define FLOW_HANDLER NULL
> > +#define FLOW_NAME NULL
> > +#define hv_msi_prepare NULL
> > +
> > +struct hv_pci_chip_data {
> > + DECLARE_BITMAP(spi_map, HV_PCI_MSI_SPI_NR);
> > + struct mutex map_lock;
> > +};
> > +
> > +/* Hyper-V vPCI MSI GIC IRQ domain */
> > +static struct irq_domain *hv_msi_gic_irq_domain;
> > +
> > +/* Hyper-V PCI MSI IRQ chip */
> > +static struct irq_chip hv_arm64_msi_irq_chip = {
> > + .name = "MSI",
> > + .irq_set_affinity = irq_chip_set_affinity_parent,
> > + .irq_eoi = irq_chip_eoi_parent,
> > + .irq_mask = irq_chip_mask_parent,
> > + .irq_unmask = irq_chip_unmask_parent
> > +};
> > +
> > +static unsigned int hv_msi_get_int_vector(struct irq_data *irqd)
> > +{
> > + return irqd->parent_data->hwirq;
> > +}
> > +
> > +static void hv_set_msi_entry_from_desc(union hv_msi_entry *msi_entry,
> > + struct msi_desc *msi_desc)
> > +{
> > + msi_entry->address = ((u64)msi_desc->msg.address_hi << 32) |
> > + msi_desc->msg.address_lo;
> > + msi_entry->data = msi_desc->msg.data;
> > +}
> > +
> > +/*
> > + * @nr_bm_irqs: Indicates the number of IRQs that were allocated from
> > + * the bitmap.
> > + * @nr_dom_irqs: Indicates the number of IRQs that were allocated from
> > + * the parent domain.
> > + */
> > +static void hv_pci_vec_irq_free(struct irq_domain *domain,
> > + unsigned int virq,
> > + unsigned int nr_bm_irqs,
> > + unsigned int nr_dom_irqs)
> > +{
> > + struct hv_pci_chip_data *chip_data = domain->host_data;
> > + struct irq_data *d = irq_domain_get_irq_data(domain, virq);
>
> FWIW, irq_domain_get_irq_data() can return NULL. Maybe that's an
> error in the "should never happen" category. Throughout kernel code,
> some callers check for a NULL result, but a lot do not.

irq_domain_get_irq_data() returns NULL when there is no mapping. If
this happens here, then the allocation tracking has gone horribly
wrong, and I certainly want to see the resulting Oops rather than
papering over it.
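
To make that concrete: the quoted hunk stops right after the lookup, but
the free path dereferences 'd' almost immediately to recover the SPI
bitmap offset. Something along these lines (a sketch of how such a helper
typically continues, reconstructed here for illustration rather than
quoted from the patch):

	static void hv_pci_vec_irq_free(struct irq_domain *domain,
					unsigned int virq,
					unsigned int nr_bm_irqs,
					unsigned int nr_dom_irqs)
	{
		struct hv_pci_chip_data *chip_data = domain->host_data;
		struct irq_data *d = irq_domain_get_irq_data(domain, virq);
		/* A NULL 'd' faults on the very next line, with a clear backtrace */
		int first = d->hwirq - HV_PCI_MSI_SPI_START;

		mutex_lock(&chip_data->map_lock);
		bitmap_release_region(chip_data->spi_map, first,
				      get_count_order(nr_bm_irqs));
		mutex_unlock(&chip_data->map_lock);

		irq_domain_free_irqs_parent(domain, virq, nr_dom_irqs);
	}

Laid out like that, a NULL return from irq_domain_get_irq_data() turns
straight into an Oops at the first dereference, pointing at the broken
allocation tracking instead of hiding it behind a defensive check.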

M.

--
Without deviation from the norm, progress is not possible.