Re: [PATCH] net: mctp: Add MCTP PCIe VDM transport driver

From: Jeremy Kerr
Date: Thu Jul 17 2025 - 03:32:46 EST


Hi YH,

> From my perspective, the other MCTP transport drivers do make use of
> abstraction layers that already exist in the kernel tree. For example,
> mctp-i3c uses i3c_device_do_priv_xfers(), which ultimately invokes
> operations registered by the underlying I3C driver. This is
> effectively an abstraction layer handling the hardware-specific
> details of TX packet transmission.
>
> In our case, there is no standard interface—like those for
> I2C/I3C—that serves PCIe VDM.
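
For context, the i3c TX path is roughly the sketch below -- illustrative
only: the helper is made up, but i3c_device_do_priv_xfers() and struct
i3c_priv_xfer are the real kernel API:

#include <linux/i3c/device.h>

/*
 * Illustrative only: hand one already-framed MCTP packet to the I3C
 * core, which dispatches it to whatever controller driver is bound
 * underneath. Not the actual mctp-i3c code.
 */
static int example_mctp_i3c_tx(struct i3c_device *i3cdev, void *buf, u16 len)
{
        struct i3c_priv_xfer xfer = {
                .rnw = false,           /* write (TX) transfer */
                .len = len,
                .data.out = buf,
        };

        return i3c_device_do_priv_xfers(i3cdev, &xfer, 1);
}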

But that's not what you're proposing here - your abstraction layer
serves one type of PCIe VDM messaging (MCTP), for only one PCIe VDM MCTP
driver.

If you were proposing to add a *generic* PCIe VDM interface, one
suitable for all messaging types (not just MCTP) and all PCIe VDM
hardware (not just ASPEED's), that would make more sense. But I think
that would be a much larger task than what you're intending here.

Start small. If we have other use-cases for an abstraction layer, we can
introduce it at that point - where we have real-world design inputs for
it.

Regardless, we have worked out that there is nothing to actually abstract
*anyway*.

> > The direct approach would definitely be preferable, if possible.
> >
> Got it. Then we'll remove the kernel thread and do TX directly.

Super!
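
In sketch form, the direct approach is just doing the hardware write
from ndo_start_xmit rather than queueing to a thread. In the sketch
below, aspeed_pcie_vdm_hw_write() is a placeholder for whatever the
register/FIFO access ends up being, and struct mctp_pcie_vdm stands in
for your driver's private data -- neither is a real interface:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Placeholder private struct and hardware write; not real interfaces. */
struct mctp_pcie_vdm;
int aspeed_pcie_vdm_hw_write(struct mctp_pcie_vdm *vdm, void *buf,
                             unsigned int len);

static netdev_tx_t mctp_pcie_vdm_start_xmit(struct sk_buff *skb,
                                            struct net_device *ndev)
{
        struct mctp_pcie_vdm *vdm = netdev_priv(ndev);
        int rc;

        /* Write the packet to the hardware directly, no kthread. */
        rc = aspeed_pcie_vdm_hw_write(vdm, skb->data, skb->len);
        if (rc) {
                ndev->stats.tx_dropped++;
        } else {
                ndev->stats.tx_packets++;
                ndev->stats.tx_bytes += skb->len;
        }

        dev_kfree_skb_any(skb);
        return NETDEV_TX_OK;
}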

> > Excellent question! I suspect we would want a four-byte representation,
> > being:
> >
> > [0]: routing type (bits 0:2, others reserved)
> > [1]: segment (or 0 for non-flit mode)
> > [2]: bus
> > [3]: device / function
> >
> > which assumes there is some value in combining formats between flit- and
> > non-flit modes. I am happy to adjust if there are better ideas.
> >
> This looks good to me—thanks for sharing!

No problem! We'll still want a bit of wider consensus on this, because
we cannot change it once upstreamed.
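
For concreteness, in C that four-byte layout would be something like
the below (struct and field names are just illustrative, not part of
the patch):

#include <linux/types.h>

/* Illustrative packing of the proposed four-byte PCIe VDM address. */
struct mctp_pcie_vdm_addr {
        __u8 routing;   /* [0]: routing type in bits 0:2, others reserved */
        __u8 segment;   /* [1]: segment, or 0 for non-flit mode */
        __u8 bus;       /* [2]: bus number */
        __u8 devfn;     /* [3]: device / function */
};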

Cheers,


Jeremy