Re: [PATCH] dma-direct: avoid redundant memory sync for swiotlb

From: Robin Murphy
Date: Tue Apr 12 2022 - 09:33:24 EST


On 12/04/2022 12:38 pm, Chao Gao wrote:
> When we looked into FIO performance with swiotlb enabled in a VM, we found
> that swiotlb_bounce() is always called one more time than expected for each
> DMA read request.

> It turns out that the bounce buffer is copied back to the original DMA
> buffer twice after the completion of a DMA request (once in
> dma_direct_sync_single_for_cpu() and once in swiotlb_tbl_unmap_single()).
> But the contents of the bounce buffer do not change between the two
> copies, so one of them is redundant.
>
> Pass the DMA_ATTR_SKIP_CPU_SYNC flag to swiotlb_tbl_unmap_single() to
> skip the memory copy there.

It's still a little suboptimal and non-obvious to call into SWIOTLB twice though - even better might be for SWIOTLB to call arch_sync_dma_for_cpu() at the appropriate place internally, then put the dma_direct_sync in an else path here. I'm really not sure why we have the current disparity between map and unmap in this regard... :/

Robin.

> This fix increases FIO 64KB sequential read throughput in a guest with
> swiotlb=force by 5.6%.

> Reported-by: Wang Zhaoyang1 <zhaoyang1.wang@xxxxxxxxx>
> Reported-by: Gao Liang <liang.gao@xxxxxxxxx>
> Signed-off-by: Chao Gao <chao.gao@xxxxxxxxx>
> Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
> ---
> kernel/dma/direct.h | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index 4632b0f4f72e..8a6cd53dbe8c 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -114,6 +114,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
> 	dma_direct_sync_single_for_cpu(dev, addr, size, dir);
> 	if (unlikely(is_swiotlb_buffer(dev, phys)))
> -		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
> +		swiotlb_tbl_unmap_single(dev, phys, size, dir,
> +					 attrs | DMA_ATTR_SKIP_CPU_SYNC);
> }
> #endif /* _KERNEL_DMA_DIRECT_H */