Re: [PATCH 7/8] docs: dma-api: update streaming DMA API physical address constraints

From: Robin Murphy
Date: Thu Jun 26 2025 - 05:58:15 EST


On 2025-06-26 6:06 am, Petr Tesarik wrote:
On Thu, 26 Jun 2025 08:49:17 +0700
Bagas Sanjaya <bagasdotme@xxxxxxxxx> wrote:

On Tue, Jun 24, 2025 at 03:39:22PM +0200, Petr Tesarik wrote:
diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index cd432996949c..65132ec88104 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -210,18 +210,12 @@ DMA_BIDIRECTIONAL direction isn't known
this API should be obtained from sources which guarantee it to be
physically contiguous (like kmalloc).
- Further, the DMA address of the memory must be within the dma_mask of
- the device. To ensure that the memory allocated by kmalloc is within
- the dma_mask, the driver may specify various platform-dependent flags
- to restrict the DMA address range of the allocation (e.g., on x86,
- GFP_DMA guarantees to be within the first 16MB of available DMA
- addresses, as required by ISA devices).
-
- Note also that the above constraints on physical contiguity and
- dma_mask may not apply if the platform has an IOMMU (a device which
- maps an I/O DMA address to a physical memory address). However, to be
- portable, device driver writers may *not* assume that such an IOMMU
- exists.
+ Mapping may also fail if the memory is not within the DMA mask of the
+ device. However, this constraint does not apply if the platform has
+ an IOMMU (a device which maps an I/O DMA address to a physical memory
+ address), or the kernel is configured with SWIOTLB (bounce buffers).
+ It is reasonable to assume that at least one of these mechanisms
+ allows streaming DMA to any physical address.

Now I realize this last sentence may be contentious...

The whole paragraph is wrong as written, not least because it conflates two separate things: "any physical address" is objectively untrue, since SWIOTLB can only bounce from buffers within the kernel's linear/direct map, i.e. not highmem, not random memory carveouts, and definitely not PAs which are not RAM at all. Secondly, even if the source buffer *is* bounceable/mappable, there is still no guarantee at all that it can actually be made to appear at a DMA address within an arbitrary DMA mask. We aim for a general expectation that 32-bit DMA masks should be well supported (but still not 100% guaranteed), but anything smaller can absolutely still have a high chance of failing, e.g. due to the SWIOTLB buffer being allocated too high, or limited IOVA space.
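
The driver-facing consequence is that a streaming mapping can fail for any of the reasons above, so its result must always be checked. A minimal sketch (kernel context, not standalone; the API calls are the real streaming DMA API, the error value is just illustrative):

```c
/* Sketch: map a kmalloc'ed buffer for device writes and check the result.
 * dma_map_single() may fail if the buffer cannot be bounced or no DMA
 * address within the device's mask is available. */
dma_addr_t addr = dma_map_single(dev, buf, size, DMA_TO_DEVICE);
if (dma_mapping_error(dev, addr)) {
	/* Fall back (smaller buffer, different allocator) or fail the
	 * request; never hand 'addr' to the hardware in this case. */
	return -ENOMEM;
}
```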

@Marek, @Robin Do you agree that device drivers should not be concerned
about the physical address of a buffer passed to the streaming DMA API?

I mean, are there any real-world systems with:
* some RAM that is not DMA-addressable,
* no IOMMU,
* CONFIG_SWIOTLB is not set?

Yes, almost certainly, because "DMA-addressable" depends on individual devices. You can't stop a user from sticking, say, a Broadcom 43xx WiFi card into a PCI slot on an i.MX6 board with 2GB of RAM that *starts* just above its 31-bit DMA capability. People are still using AMD Seattle machines, where even though arm64 does have SWIOTLB it's essentially useless since RAM starts up around 40 bits IIRC (and although they do also have SMMUs for PCI, older firmware didn't advertise them).
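
In such a setup the failure would typically surface at mask-negotiation or mapping time, which is why drivers declare their addressing capability up front. A minimal sketch (kernel context, not standalone; the 31-bit mask matches the example device above and is illustrative):

```c
/* Sketch: a driver states what it can address; the DMA core may still
 * refuse if the platform cannot satisfy the mask (no IOMMU, no usable
 * SWIOTLB, RAM entirely above the device's reach). */
if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(31))) {
	dev_err(dev, "no usable DMA configuration\n");
	return -EIO;
}
```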

FWIW if _I_ received a bug report that a device driver fails to submit
I/O on such a system, I would politely explain to the reporter that
their kernel is misconfigured, and that they should enable
CONFIG_SWIOTLB.

It's not really that simple. SWIOTLB, ZONE_DMA, etc. require platform support, which end users can't just turn on if it's not there to begin with.

Thanks,
Robin.