Re: [dpdk-dev] [PATCH 0/2] uio_msi: device driver

From: Alexander Duyck
Date: Thu Oct 01 2015 - 19:43:30 EST


On 10/01/2015 04:39 PM, Stephen Hemminger wrote:
> On Thu, 1 Oct 2015 16:03:06 -0700
> Alexander Duyck <alexander.duyck@xxxxxxxxx> wrote:
>
>> On 10/01/2015 03:00 PM, Stephen Hemminger wrote:
>>> On Thu, 1 Oct 2015 12:48:36 -0700
>>> Alexander Duyck <alexander.duyck@xxxxxxxxx> wrote:
>>>
>>>> On 10/01/2015 07:57 AM, Stephen Hemminger wrote:
>>>>> On Thu, 1 Oct 2015 13:59:02 +0300
>>>>> Avi Kivity <avi@xxxxxxxxxxxx> wrote:
>>>>>
>>>>>> On 10/01/2015 01:28 AM, Stephen Hemminger wrote:
>>>>>>> This is a new UIO device driver to allow supporting MSI-X and MSI devices
>>>>>>> in userspace. It has been used in environments like VMware and older versions
>>>>>>> of QEMU/KVM where no IOMMU support is available.
>>>>>> Why not add msi/msix support to uio_pci_generic?
>>>>> That is possible, but it would meet ABI and other resistance from the author.
>>>>> Also, uio_pci_generic makes it harder to find resources since it doesn't fully
>>>>> utilize the UIO infrastructure.
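For reference, the resource discovery Stephen is pointing at: a driver that
registers its BARs with the UIO core exposes them under
/sys/class/uio/uioN/maps/, so userspace can size and mmap() a BAR without
parsing PCI config space. A minimal userspace sketch, assuming uio0/map0 and
the documented UIO sysfs ABI:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned long size;
        FILE *f = fopen("/sys/class/uio/uio0/maps/map0/size", "r");

        if (!f || fscanf(f, "%lx", &size) != 1)
            return 1;
        fclose(f);

        int fd = open("/dev/uio0", O_RDWR);
        if (fd < 0)
            return 1;

        /* map N of /sys/class/uio/uio0/maps/mapN is selected by
         * passing offset = N * page size to mmap() */
        void *bar = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0 * getpagesize());
        if (bar == MAP_FAILED)
            return 1;

        printf("BAR0 at %p, 0x%lx bytes\n", bar, size);
        return 0;
    }

uio_pci_generic registers no memory maps with the UIO core, so its users end
up mmap()ing the PCI sysfs resource files instead.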
>>>> I'd say you are better off actually taking this in the other direction.
>>>> From what I have seen, it seems like this driver is meant to deal with
>>>> mapping VFs contained inside of guests. If you are going to fork off
>>>> and create a UIO driver for mapping VFs, why not just make it specialize
>>>> in that? You could probably simplify the code by dropping support for
>>>> legacy interrupts and I/O regions, since all of that is already covered by
>>>> uio_pci_generic anyway, if I am not mistaken.
>>>>
>>>> You could then look at naming it something like uio_vf, since uio_msi
>>>> is a bit of a misnomer: it is MSI-X that it supports, not MSI interrupts.
>>> The support needs to cover:
>>> - VF in guest
>>> - VNIC in guest (vmxnet3)
>>> It isn't just about VFs.
>> I get that, but the driver you are talking about adding duplicates
>> much of what is already there in uio_pci_generic. If nothing else, it
>> might be worthwhile to look at replacing the legacy interrupt with
>> MSI. Maybe look at naming it something like uio_pcie to indicate that
>> we are focusing on assigning PCIe and virtual devices that support MSI
>> and MSI-X and use memory BARs, rather than legacy PCI devices that are
>> doing things like mapping I/O BARs and using INTx signaling.
>>
>> My main argument is that we should probably look at dropping support for
>> anything that isn't going to be needed. If it is really important, we
>> can always add it later. I just don't see the value in having code
>> around for things we aren't likely to ever use with real devices, as we
>> are stuck supporting it for the life of the driver. I'll go ahead and
>> provide an inline review of your patch 2/2, as I think my feedback might
>> make a bit more sense that way.
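To make the uio_pcie direction above concrete, the probe path would look
roughly like this. This is a hypothetical sketch (the uio_pcie_probe name
included), not code from the posted patch:

    #include <linux/module.h>
    #include <linux/pci.h>

    static int uio_pcie_probe(struct pci_dev *pdev,
                              const struct pci_device_id *id)
    {
        int err, bar;

        /* MSI-X capable devices only; no INTx handling at all */
        if (!pci_find_capability(pdev, PCI_CAP_ID_MSIX))
            return -ENODEV;

        /* enable only the memory BARs, never the I/O port BARs */
        err = pci_enable_device_mem(pdev);
        if (err)
            return err;

        for (bar = 0; bar <= PCI_STD_RESOURCE_END; bar++) {
            if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM))
                continue;
            /* register each memory BAR as a UIO map here */
        }

        return 0;
    }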
> Ok, but having one driver that can deal with MSI-X vector setup failures
> and fall back seemed like a better strategy.

Yes, but in the case of something like a VF it is just going to make a bigger mess of things, since INTx doesn't work there. So what would you expect your driver to do in that case? Also, we have to keep in mind that the MSI-X failure case is very unlikely.
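To spell out why INTx is a dead end there: the SR-IOV spec hardwires a VF's
Interrupt Pin register to zero, so the kernel never sets up a legacy IRQ for
it. About all a driver can do is detect that, along the lines of this
hypothetical helper:

    #include <linux/pci.h>

    /* SR-IOV VFs hardwire PCI_INTERRUPT_PIN to 0, so a VF that fails
     * MSI-X setup has no legacy interrupt to fall back to. */
    static bool intx_fallback_possible(struct pci_dev *pdev)
    {
        return !pdev->is_virtfn && pdev->pin != 0;
    }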

One other thing that just occurred to me is that you may want to try using the range allocation call instead of a hard-coded number of interrupts. Then, if you start running short on vectors, you don't hard-fail and instead just allocate what you can.
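Concretely, something like pci_enable_msix_range() (assuming that interface;
it is available since 3.14). A rough sketch of the pattern:

    #include <linux/pci.h>

    /* Ask for up to nvec vectors but accept as few as one, instead of
     * hard-failing when the exact count isn't available. */
    static int setup_msix(struct pci_dev *pdev,
                          struct msix_entry *entries, int nvec)
    {
        int i, got;

        for (i = 0; i < nvec; i++)
            entries[i].entry = i;

        /* returns the number of vectors granted, or a negative errno
         * if even the minimum of one could not be allocated */
        got = pci_enable_msix_range(pdev, entries, 1, nvec);
        return got;
    }

The caller then sizes its vector usage to the return value instead of
failing outright.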

- Alex