Re: [PATCH 7/8] docs: dma-api: update streaming DMA API physical address constraints
From: Petr Tesarik
Date: Fri Jun 27 2025 - 09:02:31 EST
On Fri, 27 Jun 2025 05:55:09 -0700
Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> On Thu, Jun 26, 2025 at 05:45:18PM +0100, Robin Murphy wrote:
> > Indeed that might actually end up pushing things in the opposite direction,
> > at least in some cases. Right now, a driver with, say, a 40-bit DMA mask is
> > usually better off not special-casing DMA buffers, and just making plain
> > GFP_KERNEL allocations for everything (on the assumption that 64-bit systems
> > with masses of memory *should* have SWIOTLB to cover things in the worst
> > case), vs. artificially constraining its DMA buffers to GFP_DMA32 and having
> > to deal with allocation failure more often. However with a more precise and
> > flexible allocator, there's then a much stronger incentive for such drivers
> > to explicitly mark *every* allocation that may be used for DMA, in order to
> > get the optimal behaviour.
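[Side note for anyone following the thread: the status quo Robin
describes looks roughly like the sketch below. The driver, function and
buffer names are made up for illustration; the point is setting a
40-bit mask, allocating with plain GFP_KERNEL, and trusting
dma_map_single() to bounce through SWIOTLB when the pages land above
the mask.]

#include <linux/dma-mapping.h>
#include <linux/slab.h>

static int mydev_dma_example(struct device *dev, size_t size)
{
	dma_addr_t handle;
	void *buf;
	int ret;

	/* Declare the device's 40-bit addressing limit. */
	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
	if (ret)
		return ret;

	/* Plain GFP_KERNEL, no GFP_DMA32: on a large 64-bit system
	 * this allocation may well land above 40 bits. */
	buf = kmalloc(size, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* If it does, dma_map_single() is expected to bounce the
	 * buffer through SWIOTLB rather than fail. */
	handle = dma_map_single(dev, buf, size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle)) {
		kfree(buf);
		return -ENOMEM;
	}

	/* ... program the device with "handle", wait for DMA ... */

	dma_unmap_single(dev, handle, size, DMA_TO_DEVICE);
	kfree(buf);
	return 0;
}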
>
> It really should be using dma_alloc_pages to ensure it gets addressable
> memory for these cases. For sub-page allocations it could use dmapool,
> but that's a little annoying because it does coherent allocations which
> 95% of the users don't actually need.
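[For completeness, my reading of the dma_alloc_pages() suggestion is
the sketch below: pages guaranteed to be addressable by the device, but
non-coherent, so the caller does explicit cache maintenance. The
function name and sizes are made up.]

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/string.h>

static int mydev_fill_and_send(struct device *dev, const void *src,
			       size_t len)
{
	struct page *page;
	dma_addr_t handle;

	if (len > PAGE_SIZE)
		return -EINVAL;

	/* Addressable for this device, but the CPU mapping stays
	 * cached (no coherent attribute). */
	page = dma_alloc_pages(dev, PAGE_SIZE, &handle,
			       DMA_TO_DEVICE, GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* Fill from the CPU, then flush caches for the device. */
	memcpy(page_address(page), src, len);
	dma_sync_single_for_device(dev, handle, len, DMA_TO_DEVICE);

	/* ... device reads "len" bytes at "handle" ... */

	dma_free_pages(dev, PAGE_SIZE, page, handle, DMA_TO_DEVICE);
	return 0;
}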
Wow, thank you for this insight! There's one item on my TODO list:
convert SLAB_CACHE_DMA caches to dmapool. But now I see it would
introduce a regression (accessing DMA-coherent pages may be much
slower). I could implement a variant of dmapool that allocates normal
pages from a given physical address range, and it seems that would
actually be useful.
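Roughly this kind of interface, perhaps (all of the names below are
hypothetical; nothing like this exists in the tree yet). Internally it
could carve its chunks out of dma_alloc_pages() allocations, so the
physical range would fall out of the device's DMA mask and the memory
would stay cached for the CPU:

/* HYPOTHETICAL sketch, not an existing API: a dmapool-like allocator
 * backed by dma_alloc_pages() instead of dma_alloc_coherent(). */
struct dma_pool_noncoherent;

struct dma_pool_noncoherent *
dma_pool_create_noncoherent(const char *name, struct device *dev,
			    size_t size, size_t align,
			    enum dma_data_direction dir);

void *dma_pool_alloc_noncoherent(struct dma_pool_noncoherent *pool,
				 gfp_t flags, dma_addr_t *handle);

void dma_pool_free_noncoherent(struct dma_pool_noncoherent *pool,
			       void *vaddr, dma_addr_t handle);

/* Callers would do dma_sync_single_for_{device,cpu}() on the returned
 * handle themselves, exactly as with raw dma_alloc_pages(). */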
Petr T