Re: [PATCH 01/12] iommu/vt-d: Use iommu_get_domain_for_dev() in debugfs

From: Robin Murphy
Date: Tue May 31 2022 - 14:07:50 EST


On 2022-05-31 17:21, Jason Gunthorpe wrote:
> On Tue, May 31, 2022 at 05:01:46PM +0100, Robin Murphy wrote:

>> The DMA API doesn't need locking, partly since it can trust itself not to do
>> stupid things, and mostly because it's DMA API performance that's
>> fundamentally incompatible with serialisation anyway. Why do you think we
>> have a complicated per-CPU IOVA caching mechanism, if not to support big
>> multi-queue devices with multiple CPU threads mapping/unmapping in different
>> parts of the same DMA domain concurrently?

> Well, per-CPU is a form of locking.

Right, but that only applies for alloc_iova_fast() itself - once the CPUs have each got their distinct IOVA region, they can then pile into iommu_map() completely unhindered (and the inverse for the unmap path).

> So what are the actual locking rules here? We can call map/unmap
> concurrently but not if ... ?
>
> IOVA overlaps?

Well, I think the de-facto rule is that you technically *can* make overlapping requests, but one or both may fail, and the final outcome in terms of what ends up mapped in the domain is undefined (especially if they both succeed). The only real benefit of enforcing serialisation would be better failure behaviour in such cases, but it remains fundamentally nonsensical for callers to make contradictory requests anyway, whether concurrently or sequentially. So there doesn't seem to be much point in spending effort on improving support for nonsense.

> And we expect the iommu driver to be unable to free page table levels
> that have IOVA boundaries in them?

I'm not entirely sure what you mean there, but in general an unmap request is expected to match some previous map request - there isn't a defined API-level behaviour for partial unmaps. They might either unmap the entire region originally mapped, or just the requested part, or might fail entirely (IIRC there was some nasty code in VFIO for detecting a particular behaviour). Similarly for unmapping anything that's already not mapped, some drivers treat that as a no-op, others as an error. But again, this is even further unrelated to concurrency.

>> The simpler drivers already serialise on a per-domain lock internally, while
>> the more performance-focused ones implement lock-free atomic pagetable
>> management in a similar style to CPU arch code; either way it should work
>> fine as-is.

> The mm has page table locks at every level and generally expects them
> to be held for a lot of manipulations. There are some lockless cases,
> but it is not as aggressive as this sounds.

Oh, I've spent the last couple of weeks hacking up horrible things manipulating entries in init_mm, and never realised that that was actually the special case. Oh well, live and learn.

Robin.