Re: [PATCH v4 4/6] vfio/type1: check dma map request is within a valid iova range

From: Robin Murphy
Date: Wed Mar 14 2018 - 14:12:27 EST


Hi Shameer,

On 05/03/18 11:44, Shameerali Kolothum Thodi wrote:
Hi Robin,

-----Original Message-----
From: Robin Murphy [mailto:robin.murphy@xxxxxxx]
Sent: Friday, March 02, 2018 5:17 PM
To: Alex Williamson <alex.williamson@xxxxxxxxxx>; Shameerali Kolothum
Thodi <shameerali.kolothum.thodi@xxxxxxxxxx>
Cc: Auger Eric <eric.auger@xxxxxxxxxx>; pmorel@xxxxxxxxxxxxxxxxxx;
kvm@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; Linuxarm
<linuxarm@xxxxxxxxxx>; John Garry <john.garry@xxxxxxxxxx>; xuwei (O)
<xuwei5@xxxxxxxxxx>
Subject: Re: [PATCH v4 4/6] vfio/type1: check dma map request is within a valid
iova range

On 02/03/18 16:04, Alex Williamson wrote:
[...]
I still think you're overstretching the IOMMU reserved region interface
by trying to report possible ACS conflicts. Are we going to update the
reserved list on device hotplug? Are we going to update the list when
MMIO is enabled or disabled for each device? What if the BARs are
reprogrammed or bridge apertures changed? IMO, the IOMMU reserved list
should account for specific IOVA exclusions that apply to transactions
that actually reach the IOMMU, not aliasing that might occur in the
downstream topology. Additionally, the IOMMU group composition must be
such that these sorts of aliasing issues can only occur within an IOMMU
group. If a transaction can be affected by the MMIO programming of
another group, then the groups are drawn incorrectly for the isolation
capabilities of the hardware. Thanks,

Agreed, this is not a perfect solution covering all scenarios. As Robin
pointed out, the current code is playing it safe by reserving the full
windows.

My suggestion would be to limit this reservation to non-ACS cases only.
This will make sure that ACS-capable ARM hardware is not broken by this
series, and non-ACS ones have a chance to run once Qemu is updated to take
care of the reserved regions (at least in some user scenarios).

If you are all fine with this, I can include the ACS check I mentioned
earlier in iommu_dma_get_resv_regions() and send out the revised series.

Please let me know your thoughts.

IMO, the IOMMU layer should concern itself with ACS only as far as
isolation is concerned, report reserved ranges that are imposed at the
IOMMU itself, and leave any aliasing or routing issues in the downstream
topology to other layers, or perhaps to the user. Unfortunately, enforcing
the iova list in vfio is gated by some movement here, since we can't break
existing users. Thanks,

FWIW, given the discussion we've had here I wouldn't object to pushing
the PCI window reservation back into the DMA-specific path, such that it
doesn't get exposed via the general IOMMU API interface. We *do* want to
do it there where we are in total control of our own address space and
it just avoids a whole class of problems (even with ACS I'm not sure the
root complex can be guaranteed to send a "bad" IOVA out to the SMMU
instead of just terminating it for looking like a peer-to-peer attempt).

I do agree that it's not scalable for the IOMMU layer to attempt to
detect and describe upstream PCI limitations to userspace by itself -
they are "reserved regions" rather than "may or may not work regions"
after all. If there is a genuine integration issue between an IOMMU and
an upstream PCI RC which restricts usable addresses on a given system,
that probably needs to be explicitly communicated from firmware to the
IOMMU driver anyway, at which point that driver can report the relevant
region(s) directly from its own callback.

I suppose there's an in-between option of keeping generic window
reservations but giving them a new "only reserve this if you're being
super-cautious or don't know better" region type which we hide from
userspace and ignore in VFIO, but maybe that leaves the lines a bit too
blurred still.
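
Purely for the sake of argument, that could mean growing the existing
enum iommu_resv_type with a new entry - the name below is made up, and
nothing of the sort exists today:

/* include/linux/iommu.h - hypothetical sketch, for illustration only */
enum iommu_resv_type {
	IOMMU_RESV_DIRECT,
	IOMMU_RESV_RESERVED,
	IOMMU_RESV_MSI,
	IOMMU_RESV_SW_MSI,
	/* reserved only by cautious DMA-API users; hidden from VFIO */
	IOMMU_RESV_DMA_WINDOW,
};

with iommu_dma_get_resv_regions() tagging the PCI windows as such.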

Thanks for your reply and the details. I have made an attempt to move the
PCI window reservation back into the DMA path. Could you please take a look
at the changes below and let me know your thoughts?
(This is on top of the HW MSI reserve patches, which are now part of
linux-next.)

Thanks for putting this together - seeing what's left after the patch makes
me feel half-tempted to effectively revert 273df9635385 altogether and just
call iort_iommu_msi_get_resv_regions() directly from SMMUv3. However, there
are other things like dma-ranges constraints which we may also want to
handle generically somewhere, so it's probably worth keeping
iommu_dma_get_resv_regions() around as a common hook for now.

In future it might ultimately make sense to move it out to something like
iommu_get_fw_resv_regions(), but that can be a discussion for another day;
right now it seems this ought to be enough to resolve everyone's immediate
concerns.
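
Just to illustrate that idea, the SMMUv3 end might end up looking something
like the below - a rough, untested sketch, reusing the SW MSI window the
driver already reports:

/* Hypothetical sketch only: arm-smmu-v3.c picking up the IORT HW MSI
 * (ITS) regions itself, rather than via iommu_dma_get_resv_regions().
 */
static void arm_smmu_get_resv_regions(struct device *dev,
				      struct list_head *head)
{
	struct iommu_resv_region *region;
	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;

	/* The existing SW MSI window, unchanged */
	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
					 prot, IOMMU_RESV_SW_MSI);
	if (!region)
		return;
	list_add_tail(&region->list, head);

	/* ACPI/IORT case: report the GICv3 ITS regions directly */
	if (!is_of_node(dev->iommu_fwspec->iommu_fwnode))
		iort_iommu_msi_get_resv_regions(dev, head);
}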

Cheers,
Robin.


Thanks,
Shameer

-->8--
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index f05f3cf..ddcbbdb 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -167,40 +167,16 @@ EXPORT_SYMBOL(iommu_put_dma_cookie);
  * @list: Reserved region list from iommu_get_resv_regions()
  *
  * IOMMU drivers can use this to implement their .get_resv_regions callback
- * for general non-IOMMU-specific reservations. Currently, this covers host
- * bridge windows for PCI devices and GICv3 ITS region reservation on ACPI
- * based ARM platforms that may require HW MSI reservation.
+ * for general non-IOMMU-specific reservations. Currently, this covers GICv3
+ * ITS region reservation on ACPI based ARM platforms that may require HW MSI
+ * reservation.
  */
 void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list)
 {
-	struct pci_host_bridge *bridge;
-	struct resource_entry *window;
-
-	if (!is_of_node(dev->iommu_fwspec->iommu_fwnode) &&
-	    iort_iommu_msi_get_resv_regions(dev, list) < 0)
-		return;
-
-	if (!dev_is_pci(dev))
-		return;
-
-	bridge = pci_find_host_bridge(to_pci_dev(dev)->bus);
-	resource_list_for_each_entry(window, &bridge->windows) {
-		struct iommu_resv_region *region;
-		phys_addr_t start;
-		size_t length;
-
-		if (resource_type(window->res) != IORESOURCE_MEM)
-			continue;
-		start = window->res->start - window->offset;
-		length = window->res->end - window->res->start + 1;
-		region = iommu_alloc_resv_region(start, length, 0,
-						 IOMMU_RESV_RESERVED);
-		if (!region)
-			return;
+	if (!is_of_node(dev->iommu_fwspec->iommu_fwnode))
+		iort_iommu_msi_get_resv_regions(dev, list);
-		list_add_tail(&region->list, list);
-	}
 }
 EXPORT_SYMBOL(iommu_dma_get_resv_regions);
@@ -229,6 +205,23 @@ static int cookie_init_hw_msi_region(struct iommu_dma_cookie *cookie,
 	return 0;
 }
+static void iova_reserve_pci_windows(struct pci_dev *dev,
+		struct iova_domain *iovad)
+{
+	struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
+	struct resource_entry *window;
+	unsigned long lo, hi;
+
+	resource_list_for_each_entry(window, &bridge->windows) {
+		if (resource_type(window->res) != IORESOURCE_MEM)
+			continue;
+
+		lo = iova_pfn(iovad, window->res->start - window->offset);
+		hi = iova_pfn(iovad, window->res->end - window->offset);
+		reserve_iova(iovad, lo, hi);
+	}
+}
+
 static int iova_reserve_iommu_regions(struct device *dev,
 		struct iommu_domain *domain)
 {
@@ -238,6 +231,9 @@ static int iova_reserve_iommu_regions(struct device *dev,
 	LIST_HEAD(resv_regions);
 	int ret = 0;
+	if (dev_is_pci(dev))
+		iova_reserve_pci_windows(to_pci_dev(dev), iovad);
+
 	iommu_get_resv_regions(dev, &resv_regions);
 	list_for_each_entry(region, &resv_regions, list) {
 		unsigned long lo, hi;
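
For reference, the net effect of iova_reserve_pci_windows() above is just
to pre-populate the IOVA rbtree so that the allocator can never hand out
addresses inside a host bridge window - conceptually along these lines
(made-up window range, untested sketch):

#include <linux/iova.h>

/* Illustrative sketch only: reserve a hypothetical bridge window
 * 0x40000000..0x7fffffff up front so alloc_iova() will skip it.
 */
static void example_reserve_window(struct iova_domain *iovad)
{
	unsigned long lo = iova_pfn(iovad, 0x40000000);
	unsigned long hi = iova_pfn(iovad, 0x7fffffff);

	if (!reserve_iova(iovad, lo, hi))
		pr_warn("IOVA window reservation failed\n");
}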