Re: Can I/OAT DMA engine access PCI MMIO space

From: Dan Williams
Date: Thu May 05 2011 - 11:11:19 EST


[ adding Dave ]

On 5/5/2011 1:45 AM, ååæ wrote:
> Thanks.
> I directly read the PCI BAR address and program it into the descriptors,
> and ioatdma works.
>
> One problem is that when the PCI transfer fails (using an NTB connected
> to another system, and that system powers down), ioatdma causes a
> kernel oops:
>
> BUG_ON(is_ioat_bug(chanerr));
> in drivers/dma/ioat/dma_v3.c, line 365
>
> It seems that the HW reports an IOAT_CHANERR_DEST_ADDR_ERR, and the
> driver can't recover from this situation.
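
For reference, the approach described above (taking the physical address of a peer PCI BAR, e.g. one exposed through the NTB, and programming it straight into a DMA_MEMCPY descriptor) looks roughly like the sketch below. The device, BAR number, flags, and completion handling are illustrative assumptions, not code from this thread.

#include <linux/pci.h>
#include <linux/dmaengine.h>

/*
 * Illustrative only: copy 'len' bytes from an already DMA-mapped source
 * buffer into the MMIO window behind BAR 'bar' of 'pdev', using any
 * channel that advertises DMA_MEMCPY (e.g. ioatdma).
 */
static int memcpy_to_peer_bar(struct pci_dev *pdev, int bar,
			      dma_addr_t src, size_t len)
{
	dma_addr_t dst = pci_resource_start(pdev, bar); /* raw BAR address */
	struct dma_async_tx_descriptor *tx;
	struct dma_chan *chan;
	dma_cap_mask_t mask;
	dma_cookie_t cookie;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return -ENODEV;

	/* The BAR address goes directly into the descriptor as the dest. */
	tx = chan->device->device_prep_dma_memcpy(chan, dst, src, len,
						  DMA_PREP_INTERRUPT);
	if (!tx) {
		dma_release_channel(chan);
		return -ENOMEM;
	}

	cookie = tx->tx_submit(tx);
	dma_async_issue_pending(chan);
	/* ... wait for completion, then dma_release_channel(chan) ... */
	return dma_submit_error(cookie) ? -EIO : 0;
}

Because the destination here is a raw bus address rather than something obtained from the DMA-mapping API, a vanished peer only shows up later as the channel error discussed in this thread.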

Ah ok, this is expected with the current upstream ioatdma driver. The driver assumes that all transfers are mem-to-mem (ASYNC_TX_DMA or NET_DMA) and that a destination address error is a fatal error (similar to a kernel page fault).

With NTB, where failures are expected, the driver would need to be modified to expect the error, recover from it, and report it to the application.
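
Purely as an illustration of that kind of change (this is not code from the upstream driver), the error path might take a shape like the sketch below. The helpers ioat_fail_outstanding() and ioat_restart_channel() are hypothetical placeholders for "complete the outstanding descriptors with an error" and "reset and re-arm the channel"; the point is only that a destination address error gets acknowledged and reported rather than tripping BUG_ON().

/*
 * Hypothetical sketch; assumes the driver's internal headers
 * (drivers/dma/ioat/dma.h and registers.h) for struct ioat_chan_common,
 * IOAT_CHANERR_OFFSET and the IOAT_CHANERR_* bits.
 */
static void ioat_handle_chanerr(struct ioat_chan_common *chan, u32 chanerr)
{
	if (chanerr & IOAT_CHANERR_DEST_ADDR_ERR) {
		/* Peer went away (e.g. NTB link dropped): not a kernel bug. */
		writel(chanerr, chan->reg_base + IOAT_CHANERR_OFFSET);
		ioat_fail_outstanding(chan, -EIO);	/* hypothetical helper */
		ioat_restart_channel(chan);		/* hypothetical helper */
		return;
	}

	/* Everything else is still treated as a fatal driver/hardware bug. */
	BUG_ON(is_ioat_bug(chanerr));
}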

> What does dma-slave mean? Is it just like the DMA_SLAVE flag that exists
> in other DMA drivers?

Yes, DMA_SLAVE is the generic framework to associate a dma offload device with an mmio peripheral.
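
From the consumer side, a slave transfer to an MMIO peripheral looks roughly like the sketch below (using the generic dmaengine slave helpers from later kernels; the channel, addresses, and bus width are assumptions supplied by the caller, not anything specific to this thread).

#include <linux/dmaengine.h>

/*
 * Illustrative DMA_SLAVE usage: stream 'len' bytes from a DMA-mapped
 * buffer 'src' to a peripheral register or window at bus address
 * 'dev_addr' on a channel that supports slave transfers.
 */
static int slave_write_to_mmio(struct dma_chan *chan, dma_addr_t src,
			       dma_addr_t dev_addr, size_t len)
{
	struct dma_slave_config cfg = {
		.direction	= DMA_MEM_TO_DEV,
		.dst_addr	= dev_addr,		/* MMIO/FIFO address */
		.dst_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
		.dst_maxburst	= 16,
	};
	struct dma_async_tx_descriptor *tx;
	int ret;

	ret = dmaengine_slave_config(chan, &cfg);
	if (ret)
		return ret;

	tx = dmaengine_prep_slave_single(chan, src, len, DMA_MEM_TO_DEV,
					 DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	dmaengine_submit(tx);
	dma_async_issue_pending(chan);
	/* ... wait for the descriptor's completion callback ... */
	return 0;
}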

--
Dan