Re: Device driver location for the PCIe root port's DMA engine

From: Vidya Sagar
Date: Tue Apr 13 2021 - 14:44:55 EST




On 4/13/2021 11:43 PM, Rob Herring wrote:


On Mon, Apr 12, 2021 at 12:01 PM Vidya Sagar <vidyas@xxxxxxxxxx> wrote:

Hi
I'm starting this thread to seek advice on the best approach for adding
a driver for the PCIe root port's DMA engine.
To give some background, Tegra194's PCIe IPs are dual-mode PCIe IPs,
i.e. they work either in root port (RP) mode or in endpoint (EP) mode
based on boot-time configuration.
Since the PCIe hardware IP itself is the same for both (RP and EP)
modes, the DMA engine sub-system of the PCIe IP is also available in
both modes of operation.
Typically, a DMA engine is seen only in endpoint mode, where its
configuration registers are exposed to the host through one of the
endpoint's BARs.
In the situation we have here, where a DMA engine is present as part of
the root port, it isn't a typical general-purpose DMA engine, in the
sense that it can't have both the source and the destination address
target external memory.
For a write operation, the RP's DMA engine always fetches data from
local memory (the source) and writes it to remote memory over the PCIe
link (the destination being an endpoint's BAR), whereas for a read
operation it always fetches data from remote memory over the PCIe link
(the source) and writes it to local memory (the destination).

I see at least two ways we can have a driver for this DMA engine.
a) DMA engine driver as one of the port service drivers
Since the DMA engine is part of the root port hardware itself (although
it is not part of the root port's standard capabilities), one option is
to have the driver for the DMA engine go in as one of the port service
drivers (alongside AER, PME, hot-plug, etc.).
Based on Vendor-ID and Device-ID matching at runtime, it either gets
bound/enabled (as in the case of Tegra194) or it doesn't.
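For illustration, option (a) could look roughly like the sketch below,
modeled on the kernel's existing port service drivers such as AER and
PME. Note this is a hypothetical sketch, not working code:
PCIE_PORT_SERVICE_DMA does not exist today and would have to be added
to the portdrv core, and the probe body is only a placeholder.

```c
/* Hypothetical sketch of option (a): the RP DMA engine as a port
 * service driver. PCIE_PORT_SERVICE_DMA is an assumed new service
 * type; it would need to be defined in drivers/pci/pcie/portdrv.h
 * alongside the existing AER/PME/HP/DPC services.
 */
#include <linux/pci.h>
#include "portdrv.h"

static int tegra_pcie_dma_probe(struct pcie_device *dev)
{
	struct pci_dev *pdev = dev->port;

	/* Runtime match: only bind on root ports known to have the
	 * DMA engine (Vendor-ID/Device-ID check, as described above).
	 */
	if (pdev->vendor != PCI_VENDOR_ID_NVIDIA)
		return -ENODEV;

	/* ... map the DMA registers and register with dmaengine ... */
	return 0;
}

static struct pcie_port_service_driver tegra_pcie_dma_driver = {
	.name		= "tegra_pcie_dma",
	.port_type	= PCI_EXP_TYPE_ROOT_PORT,
	.service	= PCIE_PORT_SERVICE_DMA,	/* hypothetical */
	.probe		= tegra_pcie_dma_probe,
};

static int __init tegra_pcie_dma_init(void)
{
	return pcie_port_service_register(&tegra_pcie_dma_driver);
}
device_initcall(tegra_pcie_dma_init);
```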
b) DMA engine driver as a platform driver
The DMA engine hardware can be described as a sub-node under the PCIe
controller's node in the device tree and a separate platform driver can
be written to work with it.
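To make option (b) concrete, the sub-node might be described along the
following lines. The node name, compatible string, and properties here
are purely illustrative and not an agreed binding:

```
pcie@14100000 {
	compatible = "nvidia,tegra194-pcie";
	...

	/* Hypothetical sub-node for the RP's DMA engine */
	dma-engine {
		compatible = "nvidia,tegra194-pcie-dma";
		#dma-cells = <1>;
	};
};
```

A platform driver would then match on the (hypothetical) compatible
string and register the engine with the dmaengine framework.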

DT expects PCI bridge child nodes to be a PCI device. We've already
broken that with the interrupt controller child nodes, but I don't
really want to add more.
Understood. Is there any other way of specifying the DMA functionality, other than as a child node, so that it is in line with the DT framework's expectations?


I'm inclined to have the DMA engine driver as a port service driver, as
that makes it cleaner and is also in line with the design philosophy
(as I understand it) of the port service drivers.
Please let me know your thoughts on this.

What is the actual use case and benefit of using the DMA engine with
the RP? The only one I've come up with is that the hardware designers
think having DMA is better than not having DMA, so they include that
option on the DWC controller.
In a Tegra194-to-Tegra194 configuration (one Tegra194 as RP and the other as EP), better performance is expected when the DMA engines on both sides are used to push (write) data across, instead of using only the EP's DMA engine for both push (write) and pull (read).


Rob