Re: hfi1 use of PCI internals

From: Bjorn Helgaas
Date: Fri Jun 17 2016 - 19:04:48 EST


On Fri, Jun 17, 2016 at 06:05:43PM -0400, Ashutosh Dixit wrote:
> On Thu, Jun 16 2016 at 04:08:17 PM, Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
> >
> > That's a good start, but leads to more questions. For example, it
> > doesn't answer the obvious question of why the driver needs to
> > enable/disable ASPM from interrupt context.
>
> For power-saving reasons we keep ASPM L1 enabled, but implement a
> heuristic to "quickly" disable ASPM L1 when we notice PCIe traffic (as
> measured by the interrupt rate) starting up. If interrupt activity
> ceases, ASPM L1 is re-enabled.
>
> > Disabling ASPM should only require writing the device's Link Control
> > register. The PCI core could probably provide an interface to do that
> > in interrupt context.
> >
> > Enabling ASPM is not latency-critical and could probably be done from
> > a work queue outside interrupt context, although conceptually there
> > shouldn't be much required here either, and possibly the PCI core
> > interface could be improved.
>
> That is true: to keep latencies low we need to disable ASPM from
> interrupt context, but re-enabling ASPM is not latency-critical.
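
Roughly, the heuristic you describe could look like the sketch below.
Everything in it (the state struct, the helper names, the 1 ms idle
window) is made up for illustration; the only real work is a write to
the endpoint's own Link Control register:

#include <linux/pci.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct hfi1_aspm_state {
        struct pci_dev *pdev;
        /* set up at probe time with INIT_DELAYED_WORK(..., hfi1_aspm_reenable) */
        struct delayed_work reenable_work;
};

/* Called from the interrupt handler when traffic is detected. */
static void hfi1_aspm_irq_hit(struct hfi1_aspm_state *as)
{
        /* Config accesses take a raw spinlock, so this is IRQ-safe. */
        pcie_capability_clear_and_set_word(as->pdev, PCI_EXP_LNKCTL,
                                           PCI_EXP_LNKCTL_ASPM_L1, 0);

        /* Push the re-enable point out while interrupts keep arriving. */
        mod_delayed_work(system_wq, &as->reenable_work,
                         msecs_to_jiffies(1));
}

/* Runs in process context once interrupt activity has died down. */
static void hfi1_aspm_reenable(struct work_struct *work)
{
        struct hfi1_aspm_state *as = container_of(work,
                        struct hfi1_aspm_state, reenable_work.work);

        pcie_capability_clear_and_set_word(as->pdev, PCI_EXP_LNKCTL,
                                           0, PCI_EXP_LNKCTL_ASPM_L1);
}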

For endpoint devices, it should theoretically be possible to
enable/disable ASPM very quickly by touching only that device. We
don't do that today because pcie/aspm.c does all sorts of buffoonery
and path walking. I think that could be simplified, assuming we decide
this sort of intensive ASPM management is desirable.
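
If the core grew an interface for this, a minimal sketch might look
like the following. The function name is invented (nothing similar
exists in pcie/aspm.c today), and it deliberately skips the path
walking and latency checks:

#include <linux/pci.h>
#include <linux/errno.h>

/* Hypothetical: interrupt-safe, endpoint-only ASPM control. */
int pcie_aspm_set_device_state(struct pci_dev *pdev, u16 aspm_ctl)
{
        if (!pci_is_pcie(pdev))
                return -EINVAL;

        /* Touch only this device's Link Control ASPM Control field. */
        return pcie_capability_clear_and_set_word(pdev, PCI_EXP_LNKCTL,
                                                  PCI_EXP_LNKCTL_ASPMC,
                                                  aspm_ctl & PCI_EXP_LNKCTL_ASPMC);
}

A driver could then call pcie_aspm_set_device_state(pdev, 0) from its
interrupt handler and pcie_aspm_set_device_state(pdev,
PCI_EXP_LNKCTL_ASPM_L1) from a workqueue, without ever touching the
upstream end of the link.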

> > It's possible the latency problem could be handled by some sort of
> > quirk that overrides the acceptable latency.
>
> Correct; this is another issue that needs to be resolved.
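
For reference, such a quirk would presumably just be a normal fixup.
The sketch below is entirely hypothetical: pcie/aspm.c has no hook
today for overriding the acceptable-latency check, so the helper here
is only a declaration of a made-up interface, and the device ID is
only an example:

#include <linux/pci.h>

/* Hypothetical core interface; nothing like this exists today. */
void pcie_aspm_override_acceptable_latency(struct pci_dev *pdev, u32 l1_ns);

static void quirk_hfi1_aspm_latency(struct pci_dev *pdev)
{
        /*
         * Pretend the endpoint tolerates up to 64us of L1 exit latency,
         * regardless of what its Device Capabilities register advertises.
         */
        pcie_aspm_override_acceptable_latency(pdev, 64 * 1000);
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x24f0 /* example ID */,
                        quirk_hfi1_aspm_latency);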