Re: [RFC v2 3/5] PCIe, Add runtime PM support to PCIe port

From: Rafael J. Wysocki
Date: Mon May 07 2012 - 16:55:58 EST


On Saturday, May 05, 2012, huang ying wrote:
> On Sat, May 5, 2012 at 3:43 AM, Rafael J. Wysocki <rjw@xxxxxxx> wrote:
> > On Friday, May 04, 2012, Huang Ying wrote:
> >> From: Zheng Yan <zheng.z.yan@xxxxxxxxx>
> >>
> >> This patch adds runtime PM support to PCIe port. This is needed by
> >> PCIe D3cold support, where PCIe device in slot may be powered on/off
> >> by PCIe port.
> >>
> >> Because runtime suspend is broken on some chipsets, a whitelist is
> >> used to enable runtime PM support only for chipsets known to work.
> >>
> >> Signed-off-by: Zheng Yan <zheng.z.yan@xxxxxxxxx>
> >> Signed-off-by: Huang Ying <ying.huang@xxxxxxxxx>
> >> ---
> >> drivers/pci/pci.c | 9 +++++++++
> >> drivers/pci/pcie/portdrv_pci.c | 40 ++++++++++++++++++++++++++++++++++++++++
> >> 2 files changed, 49 insertions(+)
> >>
> >> --- a/drivers/pci/pci.c
> >> +++ b/drivers/pci/pci.c
> >> @@ -1476,6 +1476,15 @@ bool pci_check_pme_status(struct pci_dev
> >> */
> >> static int pci_pme_wakeup(struct pci_dev *dev, void *pme_poll_reset)
> >> {
> >> + struct pci_dev *bridge = dev->bus->self;
> >> +
> >> + /*
> >> + * If the bridge is in a low-power state, the configuration space
> >> + * of subordinate devices may not be accessible
> >
> > Please also say in the comment _when_ this is possible. That's far from
> > obvious, because the runtime PM framework generally ensures that parents are
> > resumed before the children, so the comment should describe the particular
> > scenario leading to this situation.
>
> OK. I will add something like below into comments.
>
> This is possible when doing a PME poll.

Well, that doesn't really explain much. :-)

I _think_ the situation is this: a device causes WAKE# to be generated, the
platform receives a GPE as a result, and we get an ACPI_NOTIFY_DEVICE_WAKE
notification for the device, which may be under a bridge (a PCIe port,
really) that is in D3_cold. Is that the case?
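
If so, I'd like the comment to spell that out explicitly. Something along
these lines, perhaps (only a sketch of the wording, assuming the scenario
above is the one you have in mind):

	/*
	 * If the bridge (e.g. a PCIe port) is in a low-power state, the
	 * configuration space of the devices below it may not be
	 * accessible.  This may happen when a wakeup event (WAKE# leading
	 * to a GPE and an ACPI_NOTIFY_DEVICE_WAKE notification) or a PME
	 * poll is handled for a device whose bridge has not been resumed
	 * yet, e.g. because the port is in D3cold.
	 */
	if (bridge && bridge->current_state != PCI_D0)
		return 0;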

> >> + */
> >> + if (bridge && bridge->current_state != PCI_D0)
> >> + return 0;
> >> +
> >> if (pme_poll_reset && dev->pme_poll)
> >> dev->pme_poll = false;
> >>
> >> --- a/drivers/pci/pcie/portdrv_pci.c
> >> +++ b/drivers/pci/pcie/portdrv_pci.c
> >> @@ -11,6 +11,7 @@
> >> #include <linux/kernel.h>
> >> #include <linux/errno.h>
> >> #include <linux/pm.h>
> >> +#include <linux/pm_runtime.h>
> >> #include <linux/init.h>
> >> #include <linux/pcieport_if.h>
> >> #include <linux/aer.h>
> >> @@ -99,6 +100,27 @@ static int pcie_port_resume_noirq(struct
> >> return 0;
> >> }
> >>
> >> +#ifdef CONFIG_PM_RUNTIME
> >> +static int pcie_port_runtime_suspend(struct device *dev)
> >> +{
> >> + struct pci_dev *pdev = to_pci_dev(dev);
> >> +
> >
> > A comment explaining why this is needed here would be welcome.
>
> Sorry, I don't quite follow what you mean here. What is needed? Why do
> we need to add PCIe port runtime suspend support?

No, why we need to call pci_save_state() from here and pci_restore_state()
from the corresponding resume routine.

In theory that shouldn't be necessary, because the PCI bus type's
runtime suspend/resume routines do the save/restore of the PCI config space.
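
For reference, the bus type's runtime suspend path already does roughly the
following (a simplified sketch of drivers/pci/pci-driver.c, not the exact
code), which is why an extra pci_save_state() in the port driver looks
redundant at first sight:

	static int pci_pm_runtime_suspend(struct device *dev)
	{
		struct pci_dev *pci_dev = to_pci_dev(dev);
		const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
		int error;

		/* Invoke the driver's own callback first ... */
		error = pm->runtime_suspend(dev);
		if (error)
			return error;

		/* ... and then save the config space and put the device
		 * into a low-power state, unless the driver has already
		 * done that itself. */
		if (!pci_dev->state_saved) {
			pci_save_state(pci_dev);
			pci_finish_runtime_suspend(pci_dev);
		}

		return 0;
	}

If there is a reason why that is not sufficient for ports, that's exactly
what the comment should explain.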

Thanks,
Rafael