Re: [PATCH v4 0/3] PCIe Host request to reserve IOVA

From: Lorenzo Pieralisi
Date: Wed May 01 2019 - 09:54:45 EST


On Wed, May 01, 2019 at 02:20:56PM +0100, Robin Murphy wrote:
> On 2019-05-01 1:55 pm, Bjorn Helgaas wrote:
> > On Wed, May 01, 2019 at 12:30:38PM +0100, Lorenzo Pieralisi wrote:
> > > On Fri, Apr 12, 2019 at 08:43:32AM +0530, Srinath Mannam wrote:
> > > > A few SoCs have a limitation: their PCIe host cannot accept certain
> > > > inbound address ranges. The allowed inbound address ranges are listed
> > > > in the dma-ranges DT property, and only those ranges may be used for
> > > > IOVA mapping; the remaining address space has to be reserved in the
> > > > IOVA allocator.
> > > >
> > > > The PCIe host driver of such SoCs has to provide the allowed address
> > > > ranges from the dma-ranges DT property as a sorted list of resource
> > > > entries. This sorted list is then processed to reserve IOVA space for
> > > > the inaccessible holes between entries while initializing the IOMMU
> > > > domain.
> > > >
> > > > This patch set is based on Linux-5.0-rc2.
> > > >
> > > > Changes from v3:
> > > > - Addressed Robin Murphy review comments.
> > > > - pcie-iproc: parse dma-ranges and build a sorted resource list.
> > > > - dma-iommu: process the list and reserve the gaps between entries.
> > > >
> > > > Changes from v2:
> > > > - Patch set rebased to Linux-5.0-rc2
> > > >
> > > > Changes from v1:
> > > > - Addressed Oza review comments.
> > > >
> > > > Srinath Mannam (3):
> > > > PCI: Add dma_ranges window list
> > > > iommu/dma: Reserve IOVA for PCIe inaccessible DMA address
> > > > PCI: iproc: Add sorted dma ranges resource entries to host bridge
> > > >
> > > > drivers/iommu/dma-iommu.c | 19 ++++++++++++++++
> > > > drivers/pci/controller/pcie-iproc.c | 44 ++++++++++++++++++++++++++++++++++++-
> > > > drivers/pci/probe.c | 3 +++
> > > > include/linux/pci.h | 1 +
> > > > 4 files changed, 66 insertions(+), 1 deletion(-)
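
For reference, the reservation in patch 2 boils down to walking the
sorted dma_ranges list and blocking out every hole between consecutive
windows. Below is a minimal sketch of that idea against the kernel's
reserve_iova()/iova_pfn() helpers; the function name and exact bounds
handling are illustrative, not the patch itself.

#include <linux/iova.h>
#include <linux/resource_ext.h>

/*
 * Sketch only: reserve the IOVA holes between sorted inbound windows
 * so the allocator never hands out an address the host bridge cannot
 * pass inbound. reserve_dma_range_gaps() is a name made up for this
 * example.
 */
static int reserve_dma_range_gaps(struct iova_domain *iovad,
				  struct list_head *dma_ranges)
{
	struct resource_entry *entry;
	dma_addr_t start = 0;

	resource_list_for_each_entry(entry, dma_ranges) {
		/* Bus address where this allowed window begins */
		dma_addr_t end = entry->res->start - entry->offset;

		if (end < start)	/* list must be sorted */
			return -EINVAL;
		if (end > start)	/* hole before this window */
			reserve_iova(iovad, iova_pfn(iovad, start),
				     iova_pfn(iovad, end - 1));

		start = entry->res->end - entry->offset + 1;
	}

	/* The real patch also reserves the tail above the last window. */
	return 0;
}
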
> > >
> > > Bjorn, Joerg,
> > >
> > > this series should not affect anything in the mainline other than its
> > > consumer (ie patch 3); if that's the case should we consider it for v5.2
> > > and if yes how are we going to merge it ?
> >
> > I acked the first one.
> >
> > Robin reviewed the second
> > (https://lore.kernel.org/lkml/e6c812d6-0cad-4cfd-defd-d7ec427a6538@xxxxxxx)
> > (though I do agree with his comment about DMA_BIT_MASK()), and Joerg
> > was OK with it provided Robin was
> > (https://lore.kernel.org/lkml/20190423145721.GH29810@xxxxxxxxxx).
> >
> > Eric reviewed the third (and pointed out a typo).
> >
> > My Kconfiggery never got fully answered -- it looks to me as though it's
> > possible to build pcie-iproc without the DMA hole support, and I thought
> > the whole point of this series was to deal with those holes
> > (https://lore.kernel.org/lkml/20190418234241.GF126710@xxxxxxxxxx). I would
> > have expected something like making pcie-iproc depend on IOMMU_SUPPORT.
> > But Srinath didn't respond to that, so maybe it's not an issue and it
> > should only affect pcie-iproc anyway.
>
> Hmm, I'm sure I had at least half-written a reply on that point, but I
> can't seem to find it now... anyway, the gist is that these inbound
> windows are generally set up to cover the physical address ranges of DRAM
> and anything else that devices might need to DMA to. Thus if you're not
> using an IOMMU, the fact that devices can't access the gaps in between
> doesn't matter because there won't be anything there anyway; it only
> needs mitigating if you do use an IOMMU and start giving arbitrary
> non-physical addresses to the endpoint.
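
To put made-up numbers on that: if dma-ranges advertises inbound
windows 0x00000000-0x7fffffff and 0x100000000-0x7ffffffff, a system
without an IOMMU never DMAs into 0x80000000-0xffffffff because no
memory lives there; with an IOMMU, though, nothing stops the allocator
from handing out an IOVA in that hole unless it is reserved first.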

So basically there is no strict IOMMU_SUPPORT dependency.

> > So bottom line, I'm fine with merging it for v5.2. Do you want to merge
> > it, Lorenzo, or ...?
>
> This doesn't look like it will conflict with the other DMA ops and MSI
> mapping changes currently in-flight for iommu-dma, so I have no
> objection to it going through the PCI tree for 5.2.

I will update the DMA_BIT_MASK() usage according to your review, fix
the typo Eric pointed out, and push out a branch; we shall see if we
can include it for v5.2.

Thanks,
Lorenzo