Re: [PATCH] iommu: Split iommu_unmaps

From: David Woodhouse
Date: Wed Nov 20 2013 - 09:29:19 EST


On Mon, 2013-11-11 at 16:09 -0700, Alex Williamson wrote:
> On Thu, 2013-11-07 at 16:37 +0000, David Woodhouse wrote:
> > On Fri, 2013-05-24 at 11:14 -0600, Alex Williamson wrote:
> > > iommu_map splits requests into pages that the iommu driver reports
> > > that it can handle. The iommu_unmap path does not do the same. This
> > > can cause problems not only from callers that might expect the same
> > > behavior as the map path, but even from the failure path of iommu_map,
> > > should it fail at a point where it has mapped and needs to unwind a
> > > set of pages that the iommu driver cannot handle directly. amd_iommu,
> > > for example, will BUG_ON if asked to unmap a non power of 2 size.
> > >
> > > Fix this by extracting and generalizing the sizing code from the
> > > iommu_map path and use it for both map and unmap.
> > >
> > > Signed-off-by: Alex Williamson <alex.williamson@xxxxxxxxxx>
> >
> > Ick, this is horrid and looks like it will introduce a massive
> > performance hit.
>
> For x86 there are essentially two users of iommu_unmap(), KVM and VFIO.
> Both of them try to unmap an individual page and look at the result to
> see how much was actually unmapped. Everything else appears to be error
> paths. So where exactly is this massive performance hit?

It's there, in the code that you describe. This patch is making that
bogus behaviour even more firmly entrenched, and harder to fix.
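For readers following along, the page-at-a-time pattern being discussed looks roughly like the following userspace model (names and the stub driver are hypothetical, not the actual KVM/VFIO code): the caller asks to unmap a single page, then advances by however much the driver reports it actually unmapped. The cost is one API call per 4KiB, which is the performance concern here.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* Hypothetical stand-in for iommu_unmap(): pretends the driver tears
 * down one 4KiB mapping per call and reports how much it unmapped. */
static size_t fake_iommu_unmap(unsigned long iova, size_t size)
{
	(void)iova;
	return size < PAGE_SIZE ? 0 : PAGE_SIZE;
}

/* The page-at-a-time caller pattern: request one page, advance by
 * whatever the driver says it actually unmapped. */
static size_t unmap_range(unsigned long iova, size_t size)
{
	size_t unmapped = 0;

	while (unmapped < size) {
		size_t n = fake_iommu_unmap(iova + unmapped, PAGE_SIZE);

		if (!n)
			break;
		unmapped += n;
	}
	return unmapped;
}
```

Unmapping a 12KiB range this way takes three round trips through the API even if the hardware could have torn the whole thing down in one walk.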

There are out-of-tree users of this IOMMU API too. And while it sucks
that they are out-of-tree, they are working on fixing that. And I've
been talking to them about performance issues they already see on the
map side.

> > Surely the answer is to fix the AMD driver so that it will just get on
> > with it and unmap the {address, range} that it's asked to unmap?
>
> The IOMMU API allows iommu drivers to expose the page sizes they
> support. Mappings are done using these sizes so it only seems fair that
> unmappings should as well. At least that's what amd_iommu was
> expecting.

This is silly.

The generic code has (almost) no business caring about the page sizes
that the IOMMU driver will support. It should care about them *only* as
an optimisation: "hey, if you manage to give me 2MiB pages I can work
faster then". But it should *only* be an optimisation. Fundamentally,
the map and unmap routines should just do as they're bloody told,
without expecting their caller to break down the calls into individual
pages.
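What's being argued for here, sketched as a userspace model (function names and the 4KiB minimum are assumptions, not any driver's actual code): the driver's unmap accepts an arbitrary range and walks it itself at its own minimum granularity, so the caller makes a single call no matter the size.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical driver-internal loop: accept any (iova, size) range and
 * walk it at the hardware's minimum granularity, so callers never need
 * to know which page sizes the driver supports. */
static size_t driver_unmap_range(unsigned long iova, size_t size)
{
	const size_t min_pgsize = 4096;	/* assumed minimum page size */
	size_t done = 0;

	while (done < size) {
		/* a real driver would clear the PTE at iova + done here */
		done += min_pgsize;
	}
	/* rounds up to the next page boundary if size isn't page-aligned */
	return done;
}
```

The loop still exists, but it lives in one place inside the driver rather than being forced onto every caller of the API.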

> That data is for dma_ops interfaces, not IOMMU API. How is changing
> iommu_unmap() in this way undoing any of your previous work?

That data is for the core map/unmap functions, which are accessed
through both APIs. While iommu_map() had the problem you describe,
iommu_unmap() didn't, and surely it would have been seeing the same
improvements... until this patch?

> > If the AMD driver really can't handle more than one page at a time, let
> > it loop for *itself* over the pages.
>
> Sure, but that's a change to the API where I think this fix was
> correcting a bug in the implementation of the API. Are there users of
> iommu_unmap() that I don't know about? Given the in-tree users, there's
> not really a compelling argument to optimize. Thanks,

It's a fix to a stupid API, yes. The current API has us manually marking
the Intel IOMMU as supporting *all* sizes of pages, just so that this
stupid "one page at a time" nonsense doesn't bite so hard... which
should have raised alarm bells at the time we did it, really.
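For reference, the size-splitting the generic code performs is roughly the following (a userspace model, not the kernel's exact implementation): pick the largest size present in the driver's pgsize_bitmap that fits both the remaining length and the alignment of the address. Setting every bit in the bitmap, as described above for the Intel driver, makes each chunk simply the largest power of two that fits, so arbitrary ranges pass through in a handful of calls instead of page by page.

```c
#include <assert.h>
#include <stddef.h>

/* Model of the generic chunking logic: choose the largest page size
 * from pgsize_bitmap that fits the remaining length and the address
 * alignment. Assumes size > 0 and size < 2^63 (GCC builtins used). */
static size_t pick_pgsize(unsigned long pgsize_bitmap,
			  unsigned long addr, size_t size)
{
	/* index of the largest power of two <= size */
	unsigned int pgsize_idx = (sizeof(size_t) * 8 - 1)
				  - __builtin_clzl(size);
	unsigned long mask;

	/* the address's alignment further limits the usable page size */
	if (addr) {
		unsigned int align_idx = __builtin_ctzl(addr);

		if (align_idx < pgsize_idx)
			pgsize_idx = align_idx;
	}

	/* keep only supported sizes no larger than that limit */
	mask = ((1UL << (pgsize_idx + 1)) - 1) & pgsize_bitmap;
	assert(mask);	/* the driver must support some fitting size */

	/* largest remaining supported size */
	return 1UL << ((sizeof(long) * 8 - 1) - __builtin_clzl(mask));
}
```

With a bitmap of only 4KiB and 2MiB, a 2MiB-aligned 4MiB range splits into two 2MiB chunks; with every bit set, any range splits into at most one chunk per set bit of its length.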

--
dwmw2
