Re: [PATCH v15,RESEND 22/23] PCI: starfive: Offload the NVMe timeout workaround to host drivers.

From: Bo Gan
Date: Wed Mar 20 2024 - 03:12:39 EST


On 3/13/24 7:51 PM, Keith Busch wrote:
On Thu, Mar 14, 2024 at 02:18:38AM +0000, Kevin Xie wrote:

On Mon, Mar 04, 2024 at 10:08:06AM -0800, Palmer Dabbelt wrote:
On Thu, 29 Feb 2024 07:08:43 PST (-0800), lpieralisi@xxxxxxxxxx wrote:
On Tue, Feb 27, 2024 at 06:35:21PM +0800, Minda Chen wrote:
From: Kevin Xie <kevin.xie@xxxxxxxxxxxxxxxx>

The StarFive JH7110 hardware cannot always keep two inbound posted writes,
such as MSI messages and NVMe completions, in order. If the NVMe completion
is written to memory later than the MSI arrives, the NVMe IRQ handler will
miss that completion.

Please explain what the problem is and what "NVMe completions" means
given that you are talking about posted writes.

Echoing Keith here. Why are you treating NVMe completions + MSI as a special case?
What's special about this combination other than being two posted writes? I own JH7110
VisionFive 2 boards myself, and if I'm not mistaken, there are two identical PCIe
controllers in the JH7110. The first one connects to the onboard USB controller of the VF2,
which also uses MSI interrupts. How come this exact problem does not affect the
USB controller? The commit message from Minda strongly suggests it does, and the same
for the R8169 NIC. So why would you suggest the problem is confined to NVMe?

Bo


Sorry, that was a hasty conclusion on our side.
It is not that any two inbound posted requests can be reordered on the JH7110 SoC;
the only case we have found so far is NVMe completions versus MSI interrupts.
To be more precise, the reordering is between the pending status of the
nvme_completion struct in host memory and the MSI that invokes the nvme_irq
handler in nvme/host/pci.c.
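
To illustrate where the race bites, here is a simplified sketch of the
phase-tag check that the IRQ handler relies on (the types and names below
are made up purely for illustration; this is not the literal
nvme/host/pci.c code):

#include <linux/types.h>
#include <asm/byteorder.h>

/* Minimal, hypothetical CQ layout, just enough to show the race. */
struct sketch_cqe {
        __le16 status;                  /* bit 0 carries the phase tag */
};

struct sketch_queue {
        struct sketch_cqe *cqes;        /* host memory written by the device */
        u16 cq_head;
        u8 cq_phase;
};

/*
 * The IRQ handler only believes a completion exists when the phase bit of
 * the in-memory CQE matches the queue's expected phase.  If the MSI
 * overtakes the posted write of the CQE, this returns false and the
 * interrupt looks spurious.
 */
static bool sketch_cqe_pending(struct sketch_queue *q)
{
        return (le16_to_cpu(q->cqes[q->cq_head].status) & 1) == q->cq_phase;
}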

We posted the original workaround patch before:
https://lore.kernel.org/lkml/CAJM55Z9HtBSyCq7rDEDFdw644pOWCKJfPqhmi3SD1x6p3g2SLQ@xxxxxxxxxxxxxx/
We have carried it in our GitHub branch and it has worked fine for a long time.
We are looking forward to better advice from someone familiar with the NVMe driver.

So this platform treats strictly ordered writes the same as if relaxed
ordering was enabled? I am not sure if we could reasonably work around
such behavior. An arbitrary delay is likely too long for most cases, and
too short for the worst case.

I suppose we could quirk a non-posted transaction in the interrupt
handler to force flush pending memory updates, but that will noticeably
harm your nvme performance. Maybe if you constrain such behavior to the
spurious IRQ_NONE condition, then it might be okay? I don't know.
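
If we read the suggestion correctly, the quirk would look roughly like the
sketch below. Everything here is hypothetical (the helper names, and the
choice of a CSTS read as the non-posted transaction); it is only meant to
show the idea of flushing on the spurious path, not a proposed patch:

#include <linux/io.h>
#include <linux/interrupt.h>

#define SKETCH_REG_CSTS         0x1c    /* NVMe Controller Status register */

struct sketch_queue {
        void __iomem *bar;              /* mapped controller registers */
        /* ... CQ state as in the earlier sketch ... */
};

/* Imaginary helper: drain the CQ, return true if any entry was found. */
static bool sketch_process_cq(struct sketch_queue *q);

static irqreturn_t sketch_nvme_irq(int irq, void *data)
{
        struct sketch_queue *q = data;

        if (sketch_process_cq(q))
                return IRQ_HANDLED;

        /*
         * Spurious so far: the quirk relies on a non-posted MMIO read not
         * completing before earlier inbound posted writes, so any CQ entry
         * the device has already sent becomes visible in memory ...
         */
        readl(q->bar + SKETCH_REG_CSTS);

        /* ... then poll once more before giving up. */
        return sketch_process_cq(q) ? IRQ_HANDLED : IRQ_NONE;
}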


I have also copied Keith's latest reply below, and I have the same doubt.

Hm, that may not be good enough: if nvme completions can be reordered
with their msi's, then I assume data may reorder with their completion.
Your application will inevitably see stale and corrupted data, so it
sounds like you need some kind of barrier per completion. Ouch!
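
If a barrier per completion really were needed, a hypothetical sketch
(reusing the made-up names from the sketch above) of what that implies:

/* Imaginary completion path. */
static void sketch_complete_request(struct sketch_queue *q, u16 idx);

/*
 * Hypothetical per-completion barrier: before trusting the data buffer a
 * newly observed CQE describes, issue a non-posted read so that the
 * device's earlier posted writes (the data DMA) are known to have reached
 * memory.  One MMIO read per completion is exactly the kind of cost that
 * would noticeably hurt NVMe performance.
 */
static void sketch_handle_cqe(struct sketch_queue *q, u16 idx)
{
        readl(q->bar + SKETCH_REG_CSTS);        /* flush inbound posted writes */
        sketch_complete_request(q, idx);
}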