Re: Can I/OAT DMA engine access PCI MMIO space

From: ååæ
Date: Thu May 05 2011 - 04:42:41 EST


On 2011-05-03 23:58, Dan Williams wrote:

Do you mean that if I have mapped the mmio, I can't use I/OAT dma
transfers to this region any more?
I can use memcpy to copy the data, but it consumes a lot of cpu because
PCI access is too slow.
If I could use I/OAT dma and the async_tx api to do the job, the
performance should be improved.
Thanks


The async_tx api only supports memory-to-memory transfers. To write to mmio space with ioatdma you would need a custom method, like the dma-slave support in other drivers, to program the descriptors with the physical mmio bus address.

--
Dan
Thanks.
I read the PCI BAR address directly and program it into the descriptors, and ioatdma works.
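Roughly, the descriptor setup is just a memcpy prep whose destination is the peer BAR's
bus address; a sketch only (chan, peer_pdev, bar, src_dma and len are placeholders set up
elsewhere):

#include <linux/dmaengine.h>
#include <linux/pci.h>

/*
 * Sketch: hand the peer BAR's bus address to the ioatdma channel as the
 * destination of an ordinary memcpy descriptor through the generic
 * dmaengine interface.  chan, peer_pdev, bar, src_dma and len are
 * placeholders set up elsewhere.
 */
static dma_cookie_t copy_to_peer_bar(struct dma_chan *chan,
                                     struct pci_dev *peer_pdev, int bar,
                                     dma_addr_t src_dma, size_t len)
{
        struct dma_device *dev = chan->device;
        /* bus address of the peer BAR, used directly as the DMA destination */
        dma_addr_t dst = pci_resource_start(peer_pdev, bar);
        struct dma_async_tx_descriptor *tx;
        dma_cookie_t cookie;

        tx = dev->device_prep_dma_memcpy(chan, dst, src_dma, len,
                                         DMA_PREP_INTERRUPT);
        if (!tx)
                return -ENOMEM;

        cookie = dmaengine_submit(tx);
        dma_async_issue_pending(chan);
        return cookie;
}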
The problem is that when a PCI transfer fails (I use an NTB to connect to another system,
and that system powers down), ioatdma causes a kernel oops:

BUG_ON(is_ioat_bug(chanerr));
in drivers/dma/ioat/dma_v3.c, line 365

It seems the hardware reports an IOAT_CHANERR_DEST_ADDR_ERR, and the driver can't recover
from this situation.
What does dma-slave mean? Is it like the DMA_SLAVE flag that exists in other DMA drivers?
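My guess, sketched below against the dmaengine slave interface (chan, src_dma,
mmio_bus_addr, len and the 4-byte bus width are placeholders, and I understand ioatdma
itself does not implement this), is that a DMA_SLAVE-capable driver lets a client target a
fixed bus address like this:

#include <linux/dmaengine.h>

/*
 * Sketch of a slave-style write to a fixed bus address (e.g. an MMIO
 * window), the way DMA_SLAVE-capable drivers expose it.  chan, src_dma,
 * mmio_bus_addr and len are placeholders.
 */
static int slave_write_sketch(struct dma_chan *chan, dma_addr_t src_dma,
                              dma_addr_t mmio_bus_addr, size_t len)
{
        struct dma_slave_config cfg = {
                .direction      = DMA_MEM_TO_DEV,
                .dst_addr       = mmio_bus_addr,
                .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
        };
        struct dma_async_tx_descriptor *tx;
        dma_cookie_t cookie;
        int ret;

        ret = dmaengine_slave_config(chan, &cfg);
        if (ret)
                return ret;

        tx = dmaengine_prep_slave_single(chan, src_dma, len,
                                         DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
        if (!tx)
                return -EIO;

        cookie = dmaengine_submit(tx);
        dma_async_issue_pending(chan);
        return dma_submit_error(cookie);
}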
