Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

From: Tomi Valkeinen
Date: Wed Mar 12 2014 - 06:47:49 EST


On 12/03/14 12:25, Russell King - ARM Linux wrote:
> On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
>> In theory unidirectional links in DT are indeed enough. However, let's not
>> forget the following.
>>
>> - There's no such thing as single start points for graphs. Sure, in some
>> simple cases the graph will have a single start point, but that's not a
>> generic rule. For instance the camera graphs
>> http://ideasonboard.org/media/omap3isp.ps and
>> http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus two
>> starting points from a data flow point of view.
>
> I think we need to stop thinking of a graph linked in terms of data
> flow - that's really not useful.
>
> Consider a display subsystem. The CRTC is the primary interface for
> the CPU - this is the "most interesting" interface, it's the interface
> which provides access to the picture to be displayed for the CPU. Other
> interfaces are secondary to that purpose - reading the I2C DDC bus for
> the display information is all secondary to the primary purpose of
> displaying a picture.
>
> For a capture subsystem, the primary interface for the CPU is the frame
> grabber (whether it be an already encoded frame or not.) The sensor
> devices are all secondary to that.
>
> So, the primary software interface in each case is where the data for
> the primary purpose is transferred. This is the point at which these
> graphs should commence since this is where we would normally start
> enumeration of the secondary interfaces.
>
> V4L2 even provides interfaces for this: you open the capture device,
> which then allows you to enumerate the capture device's inputs, and
> this in turn allows you to enumerate their properties. You don't open
> a particular sensor and work back up the tree.

We do it the other way around in OMAP DSS. It's the displays the user is
interested in, so we enumerate the displays, and if the user wants to
enable a display, we then follow the links from the display towards the
SoC, configuring and enabling the components along the way.

I don't have a strong opinion on the direction; both have their pros. In
any case, that's more of a driver model thing, and I'm fine with linking
in the DT outwards from the SoC (presuming double-linking, which I still
like best, is not ok).
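For reference, double-linking in DT terms means a 'remote-endpoint'
phandle at both ends of the connection, something like this (the node
and label names here are made up, not from any real binding):

```dts
/* SoC-side display controller */
dss: dss@58000000 {
	port {
		dpi_out: endpoint {
			/* forward link: SoC towards panel */
			remote-endpoint = <&lcd_in>;
		};
	};
};

/* external panel */
panel {
	port {
		lcd_in: endpoint {
			/* back link: panel towards SoC */
			remote-endpoint = <&dpi_out>;
		};
	};
};
```

With only unidirectional links, one of the two 'remote-endpoint'
properties above would be dropped, and the other side would have to be
found by walking the graph from the linking end.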

> I believe trying to do this according to the flow of data is just wrong.
> You should always describe things from the primary device for the CPU
> towards the peripheral devices and never the opposite direction.

In that case there's possibly the issue I mentioned in another email in
this thread: an encoder can be used in both a display and a capture
pipeline. Describing the links outwards from the SoC means that
sometimes the encoder's input port is pointed at, and sometimes its
output port.

That's possibly ok, but I think Grant was of the opinion that things
should be explicitly described in the binding documentation: either a
device's port must contain a 'remote-endpoint' property, or it must not,
but no "sometimes". But maybe I took his words too literally.
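To illustrate, with single links going outwards from the SoC, a
hypothetical encoder with an input port@0 and an output port@1 would
look different depending on the pipeline it sits in (names invented for
the example):

```dts
encoder {
	ports {
		#address-cells = <1>;
		#size-cells = <0>;

		/* input port: in a display pipeline this port is
		 * pointed at by the SoC's display controller, so it
		 * carries no remote-endpoint itself; in a capture
		 * pipeline it must point at the sensor. */
		port@0 {
			reg = <0>;
			enc_in: endpoint {
				/* capture case only: */
				remote-endpoint = <&sensor_out>;
			};
		};

		/* output port: the mirror image - it points at the
		 * panel in the display case, but is pointed at by the
		 * SoC's capture interface in the capture case. */
		port@1 {
			reg = <1>;
			enc_out: endpoint {
				/* display case only: */
				remote-endpoint = <&panel_in>;
			};
		};
	};
};
```

So the same port sometimes contains 'remote-endpoint' and sometimes
does not, which is exactly the "no sometimes" problem above.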

Then there's also the audio example Philipp mentioned, where there is no
clear "outwards from SoC" direction for the link, as the link is
bidirectional and between two non-SoC devices.

Tomi

