Re: [PATCH v3 2/3] uio_pci_generic: add MSI/MSI-X support

From: Alex Williamson
Date: Wed Oct 07 2015 - 12:31:13 EST


On Wed, 2015-10-07 at 09:52 +0300, Avi Kivity wrote:
>
> On 10/06/2015 09:51 PM, Alex Williamson wrote:
> > On Tue, 2015-10-06 at 18:23 +0300, Avi Kivity wrote:
> >> On 10/06/2015 05:56 PM, Michael S. Tsirkin wrote:
> >>> On Tue, Oct 06, 2015 at 05:43:50PM +0300, Vlad Zolotarov wrote:
> >>>> The only "like VFIO" behavior we implement here is binding the MSI-X
> >>>> interrupt notification to an eventfd descriptor.
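
For concreteness, that binding amounts to a per-vector interrupt handler
that signals an eventfd. A minimal kernel-side sketch (identifiers are
illustrative, not taken from the patch):

	#include <linux/eventfd.h>
	#include <linux/interrupt.h>

	struct msix_irq_ctx {
		struct eventfd_ctx *trigger;	/* from eventfd_ctx_fdget() */
	};

	static irqreturn_t msix_irq_handler(int irq, void *arg)
	{
		struct msix_irq_ctx *ctx = arg;

		/* Wake whatever polls or reads the eventfd in userspace */
		eventfd_signal(ctx->trigger, 1);
		return IRQ_HANDLED;
	}

Each enabled vector is then hooked up with request_irq(vector,
msix_irq_handler, 0, "uio-msix", ctx), and userspace blocks on the
eventfd.
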
> >>> There will be more if you add some basic memory protections.
> >>>
> >>> Besides, that's not true.
> >>> Your patch queries the MSI capability and sets the number of vectors.
> >>> You even hinted you want to add BAR mapping down the road.
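
The capability query and vector setup are standard PCI core calls; a
hypothetical sketch of the sort of thing such a patch does (not the
actual patch code):

	#include <linux/pci.h>

	#define NVEC 4	/* example vector count */

	static struct msix_entry entries[NVEC];

	static int enable_msix(struct pci_dev *pdev)
	{
		int i, ret;

		for (i = 0; i < NVEC; i++)
			entries[i].entry = i;	/* MSI-X table index */

		/* Returns vectors granted on success, negative errno on failure */
		ret = pci_enable_msix_range(pdev, entries, 1, NVEC);
		return ret < 0 ? ret : 0;
	}
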
> >> BAR mapping is already available from sysfs; it is not mandatory.
> >>
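For reference, the sysfs route needs no new kernel code at all; userspace
can already map a BAR along these lines (error handling elided; the device
address is an example):

	#include <fcntl.h>
	#include <sys/mman.h>
	#include <sys/stat.h>

	/* Map BAR0 of an example device via its sysfs resource file */
	static void *map_bar0(size_t *len)
	{
		struct stat st;
		int fd = open("/sys/bus/pci/devices/0000:02:00.0/resource0",
			      O_RDWR);

		fstat(fd, &st);		/* resource file size == BAR size */
		*len = st.st_size;
		return mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
	}
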
> >>> VFIO does all of that.
> >>>
> >> Copying vfio maintainer Alex (hi!).
> >>
> >> vfio's charter is modern iommu-capable configurations. It is designed to
> >> be secure enough to be usable by an unprivileged user.
> >>
> >> For performance and hardware reasons, many DPDK deployments use
> >> uio_pci_generic. They are willing to trade off the security provided by
> >> vfio for the performance and deployment flexibility of uio_pci_generic.
> >> Forcing these features into vfio will compromise its security and
> >> needlessly complicate its code (I guess it can be done with a "null"
> >> iommu, but then vfio will have to decide whether it is secure or not).
> > It's not just the iommu model vfio uses; it's that vfio is built around
> > iommu groups. For instance, to use a device in vfio, the user opens the
> > vfio group file and asks for the device within that group. That's a
> > fairly fundamental part of the mechanics to sidestep.
> >
> > However, is there an opportunity at a lower level? Systems without an
> > iommu typically have dma ops handled via a software iotlb (i.e. bounce
> > buffers), but I think they simply don't have iommu ops registered.
> > Could a no-iommu iommu subsystem provide enough dummy iommu ops to fake
> > out vfio? It would need to iterate the devices on the bus and come up
> > with dummy iommu groups and dummy versions of iommu_map and unmap. The
> > grouping is easy, one device per group, there's no isolation anyway.
> > The vfio type1 iommu backend will do pinning, which seems like an
> > improvement over the mlock that uio users probably try to do now.
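
A sketch of what such dummy ops might look like (purely illustrative,
against the iommu_ops of that era):

	#include <linux/iommu.h>

	static int no_iommu_map(struct iommu_domain *domain, unsigned long iova,
				phys_addr_t paddr, size_t size, int prot)
	{
		/* No translation hardware, so only identity mappings make sense */
		return iova == paddr ? 0 : -EINVAL;
	}

	static size_t no_iommu_unmap(struct iommu_domain *domain,
				     unsigned long iova, size_t size)
	{
		return size;	/* nothing to tear down */
	}

	static const struct iommu_ops no_iommu_ops = {
		.map	= no_iommu_map,
		.unmap	= no_iommu_unmap,
		/* plus dummy domain_alloc/attach_dev, one group per device */
	};
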
>
> Right now, people use hugetlbfs maps, which both lock the memory and
> provide better performance.
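
Typically that's a shared mapping of a file on a hugetlbfs mount; huge
pages are never swapped out, so the mapping is effectively pinned. For
example (fd and the 2MB page size are assumptions):

	#include <sys/mman.h>

	/* fd: a file on a hugetlbfs mount, as DPDK sets up */
	static void *map_hugepage(int fd)
	{
		return mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
	}
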
>
> > I
> > guess the no-iommu map would error if the IOVA isn't simply the bus
> > address of the page mapped.
> >
> > Of course this is entirely unsafe and this no-iommu driver should taint
> > the kernel, but it at least standardizes on one userspace API and you're
> > already doing completely unsafe things with uio. vfio should be
> > enlightened at least to the point that it allows only privileged users
> > access to devices under such a (lack of) iommu.
>
> There is an additional complication. With an iommu, userspace programs
> the device with virtual addresses, but without one, it has to program
> physical addresses. So vfio would need to communicate this bit of
> information.
>
> We can go further and define a better translation API than the current
> one (reading /proc/pagemap). But it's going to be a bigger change to
> vfio than I thought at first.
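
The current userspace idiom being replaced is roughly this (error
handling elided; pagemap entries are 64 bits, PFN in bits 0-54, bit 63
means present):

	#include <fcntl.h>
	#include <stdint.h>
	#include <unistd.h>

	/* Translate a virtual address to physical via /proc/self/pagemap.
	 * The page must be resident (touch it first); returns 0 on failure. */
	static uint64_t virt_to_phys(void *vaddr)
	{
		uint64_t entry;
		long pgsz = sysconf(_SC_PAGESIZE);
		int fd = open("/proc/self/pagemap", O_RDONLY);

		pread(fd, &entry, sizeof(entry),
		      (uintptr_t)vaddr / pgsz * sizeof(entry));
		close(fd);

		if (!(entry & (1ULL << 63)))	/* present bit */
			return 0;
		return (entry & ((1ULL << 55) - 1)) * pgsz
		       + (uintptr_t)vaddr % pgsz;
	}
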

It sounds like a separate vfio iommu backend from type1, one that just
pins the page and returns the bus address. The curse and benefit would
be that existing type1 users wouldn't "just work" in an insecure mode;
the DMA mapping code would need to be aware of the difference. Still, I
really do prefer to keep vfio as only exposing a secure, iommu-protected
device to the user, because surely someone will try it, and users would
expect that removing iommu restrictions from vfio means they can do
device assignment to VMs without an iommu.
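
Concretely, where type1's map path ends in iommu_map(), the no-iommu
variant might reduce to pin-and-report, along these lines (a hypothetical
sketch, not working code):

	#include <linux/io.h>
	#include <linux/mm.h>

	/* Pin one user page and report its bus address; without an iommu,
	 * bus address == physical address on most platforms. */
	static int noiommu_pin_page(unsigned long vaddr, dma_addr_t *bus_addr)
	{
		struct page *page;
		int ret;

		ret = get_user_pages_fast(vaddr, 1, 1 /* write */, &page);
		if (ret != 1)
			return ret < 0 ? ret : -EFAULT;

		*bus_addr = page_to_phys(page) + (vaddr & ~PAGE_MASK);
		return 0;
	}
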
