Re: [PATCH 1/1] virtio_ring: fix return code on DMA mapping fails

From: Halil Pasic
Date: Fri Nov 29 2019 - 09:09:57 EST


On Tue, 26 Nov 2019 19:45:27 +0100
Christoph Hellwig <hch@xxxxxx> wrote:

> On Sat, Nov 23, 2019 at 09:39:08AM -0600, Tom Lendacky wrote:
> > Ideally, having a pool of shared pages for DMA, outside of standard
> > SWIOTLB, might be a good thing. On x86, SWIOTLB really seems geared
> > towards devices that don't support 64-bit DMA. If a device supports 64-bit
> > DMA then it can use shared pages that reside anywhere to perform the DMA
> > and bounce buffering. I wonder if the SWIOTLB support can be enhanced to
> > support something like this, using today's low SWIOTLB buffers if the DMA
> > mask necessitates it, otherwise using a dynamically sized pool of shared
> > pages that can live anywhere.
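
To make sure I understand the idea, here is a minimal sketch of what
allocating from such a shared pool could look like on x86. The pool
bookkeeping is omitted and sev_shared_pool_alloc() is a made-up name;
set_memory_decrypted() is the existing primitive for sharing pages
with the hypervisor:

#include <linux/mm.h>
#include <linux/set_memory.h>

/* Hypothetical allocator for a dynamically sized shared-page pool. */
static void *sev_shared_pool_alloc(size_t size)
{
	unsigned int order = get_order(size);
	unsigned long addr = __get_free_pages(GFP_KERNEL, order);

	if (!addr)
		return NULL;

	/* Share the pages with the hypervisor (clears the C-bit). */
	if (set_memory_decrypted(addr, 1 << order)) {
		free_pages(addr, order);
		return NULL;
	}

	return (void *)addr;
}

Since such pages are DMA addressable anywhere in the 64-bit space, the
pool could grow and shrink on demand instead of being carved out of
low memory at boot like the classic SWIOTLB buffer.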
>
> I think that can be done relatively easily. I've actually been thinking
> of multiple pool support for a while to replace the bounce buffering
> in the block layer for ISA devices (24-bit addressing).
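
For reference, my understanding is that this would replace per-queue
setups along the lines of the following, where the block layer today
bounces anything above the 24-bit ISA boundary (sketch based on the
existing blk_queue_bounce_limit() interface):

#include <linux/blkdev.h>

/* Legacy ISA-era driver: cap the queue at 24-bit addresses so the
 * block layer bounces everything above that limit. */
static void isa_driver_setup_queue(struct request_queue *q)
{
	blk_queue_bounce_limit(q, BLK_BOUNCE_ISA);
}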
>
> I've also been looking into a dma_alloc_pages interface to help people
> just allocate pages that are always dma addressable, but don't need
> a coherent allocation. My last version I shared is here:
>
> http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma_alloc_pages
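
If I read the branch right, usage would be roughly as below, with the
caveat that the exact signature in your tree may differ from this
sketch:

	dma_addr_t dma_handle;
	struct page *page;

	/* Pages guaranteed to be addressable by the device, but with
	 * no coherent-allocation requirement. */
	page = dma_alloc_pages(dev, size, &dma_handle, DMA_TO_DEVICE,
			       GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* ... stream data to the device using dma_handle ... */

	dma_free_pages(dev, size, page, dma_handle, DMA_TO_DEVICE);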
>
> But it turns out this still doesn't work with SEV, as we'll always
> bounce. And I've been kinda lost figuring out a way to
> allocate unencrypted pages that we can feed into the normal
> dma_map_page & co interfaces due to the magic encryption bit in
> the address. I guess we could have a fallback path in the mapping
> path and just unconditionally clear that bit in the dma_to_phys
> path.
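
If I understand correctly, the unconditional clearing you describe
would be along the lines of what the dma-direct helpers already do
today, where __sme_clr() masks off the SME/SEV encryption bit (sketch
based on include/linux/dma-direct.h, details may differ by tree):

static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
{
	phys_addr_t paddr = __dma_to_phys(dev, daddr);

	/* Strip the encryption bit so the physical address is usable. */
	return __sme_clr(paddr);
}

A fallback in the mapping path could then hand out DMA addresses with
the encryption bit cleared and rely on the same masking on the way
back.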

Thanks Christoph! Thanks Tom! I will do some looking and thinking and
report back.

Regards,
Halil