Re: [PATCH v4 1/6] vfio/pci: Mask INTx during runtime suspend

From: Alex Williamson
Date: Wed Jul 06 2022 - 11:48:48 EST


On Fri, 1 Jul 2022 16:38:09 +0530
Abhishek Sahu <abhsahu@xxxxxxxxxx> wrote:

> This patch adds INTx handling during runtime suspend/resume.
> All the suspend/resume related code for the user to put the device
> into the low power state will be added in subsequent patches.
>
> The INTx are shared among devices. Whenever any INTx interrupt comes

"The INTx lines may be shared..."

> for the VFIO devices, then vfio_intx_handler() will be called for each
> device. Inside vfio_intx_handler(), it calls pci_check_and_mask_intx()

"...device sharing the interrupt."

> and checks if the interrupt has been generated for the current device.
> Now, if the device is already in the D3cold state, then the config space
> can not be read. Attempt to read config space in D3cold state can
> cause system unresponsiveness in a few systems. To prevent this, mask
> INTx in runtime suspend callback and unmask the same in runtime resume
> callback. If INTx has been already masked, then no handling is needed
> in runtime suspend/resume callbacks. 'pm_intx_masked' tracks this, and
> vfio_pci_intx_mask() has been updated to return true if INTx has been
> masked inside this function.
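
For reference, the device access at issue is the pci_check_and_mask_intx()
call in the shared handler; a trimmed paraphrase of the current
vfio_pci_intrs.c code (not an exact quote, the non-pci_2_3 branch is
omitted):

static irqreturn_t vfio_intx_handler(int irq, void *dev_id)
{
	struct vfio_pci_core_device *vdev = dev_id;
	unsigned long flags;
	int ret = IRQ_NONE;

	spin_lock_irqsave(&vdev->irqlock, flags);
	/*
	 * For PCI 2.3 devices on a shared line, the handler reads the
	 * device's config space to find out whether this device asserted
	 * INTx -- the access that becomes unsafe once the device is in
	 * D3cold.
	 */
	if (vdev->pci_2_3 && !vdev->ctx[0].masked &&
	    pci_check_and_mask_intx(vdev->pdev)) {
		vdev->ctx[0].masked = true;
		ret = IRQ_HANDLED;
	}
	spin_unlock_irqrestore(&vdev->irqlock, flags);

	if (ret == IRQ_HANDLED)
		vfio_send_intx_eventfd(vdev, NULL);

	return ret;
}
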
>
> For runtime suspend triggered when there is no user of the VFIO
> device, is_intx() will return false and these callbacks won't do
> anything.
>
> MSI/MSI-X interrupts are not shared, so similar handling should not be
> needed for them. vfio_msihandler() triggers eventfd_signal()
> without doing any device-specific config access. When the user performs
> any config access or IOCTL after receiving the eventfd notification,
> the device will be moved to the D0 state first before
> servicing any request.
>
> Signed-off-by: Abhishek Sahu <abhsahu@xxxxxxxxxx>
> ---
> drivers/vfio/pci/vfio_pci_core.c | 37 +++++++++++++++++++++++++++----
> drivers/vfio/pci/vfio_pci_intrs.c | 6 ++++-
> include/linux/vfio_pci_core.h | 3 ++-
> 3 files changed, 40 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> index a0d69ddaf90d..5948d930449b 100644
> --- a/drivers/vfio/pci/vfio_pci_core.c
> +++ b/drivers/vfio/pci/vfio_pci_core.c
> @@ -259,16 +259,45 @@ int vfio_pci_set_power_state(struct vfio_pci_core_device *vdev, pci_power_t stat
> return ret;
> }
>
> +#ifdef CONFIG_PM
> +static int vfio_pci_core_runtime_suspend(struct device *dev)
> +{
> + struct vfio_pci_core_device *vdev = dev_get_drvdata(dev);
> +
> + /*
> + * If INTx is enabled, then mask INTx before going into the runtime
> + * suspended state and unmask the same in the runtime resume.
> + * If INTx has already been masked by the user, then
> + * vfio_pci_intx_mask() will return false and in that case, INTx
> + * should not be unmasked in the runtime resume.
> + */
> + vdev->pm_intx_masked = (is_intx(vdev) && vfio_pci_intx_mask(vdev));
> +
> + return 0;
> +}
> +
> +static int vfio_pci_core_runtime_resume(struct device *dev)
> +{
> + struct vfio_pci_core_device *vdev = dev_get_drvdata(dev);
> +
> + if (vdev->pm_intx_masked)
> + vfio_pci_intx_unmask(vdev);
> +
> + return 0;
> +}
> +#endif /* CONFIG_PM */
> +
> /*
> - * The dev_pm_ops needs to be provided to make pci-driver runtime PM working,
> - * so use structure without any callbacks.
> - *
> * The pci-driver core runtime PM routines always save the device state
> * before going into suspended state. If the device is going into low power
> * state with only runtime PM ops, then no explicit handling is needed
> * for the devices which have NoSoftRst-.
> */
> -static const struct dev_pm_ops vfio_pci_core_pm_ops = { };
> +static const struct dev_pm_ops vfio_pci_core_pm_ops = {
> + SET_RUNTIME_PM_OPS(vfio_pci_core_runtime_suspend,
> + vfio_pci_core_runtime_resume,
> + NULL)
> +};
>
> int vfio_pci_core_enable(struct vfio_pci_core_device *vdev)
> {
> diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
> index 6069a11fb51a..1a37db99df48 100644
> --- a/drivers/vfio/pci/vfio_pci_intrs.c
> +++ b/drivers/vfio/pci/vfio_pci_intrs.c
> @@ -33,10 +33,12 @@ static void vfio_send_intx_eventfd(void *opaque, void *unused)
> eventfd_signal(vdev->ctx[0].trigger, 1);
> }
>
> -void vfio_pci_intx_mask(struct vfio_pci_core_device *vdev)
> +/* Returns true if INTx has been masked by this function. */
> +bool vfio_pci_intx_mask(struct vfio_pci_core_device *vdev)
> {
> struct pci_dev *pdev = vdev->pdev;
> unsigned long flags;
> + bool intx_masked = false;
>
> spin_lock_irqsave(&vdev->irqlock, flags);
>
> @@ -60,9 +62,11 @@ void vfio_pci_intx_mask(struct vfio_pci_core_device *vdev)
> disable_irq_nosync(pdev->irq);
>
> vdev->ctx[0].masked = true;
> + intx_masked = true;
> }
>
> spin_unlock_irqrestore(&vdev->irqlock, flags);
> + return intx_masked;
> }


There's certainly another path through this function that masks the
interrupt, which makes the definition of this return value a bit
confusing. Wouldn't it be simpler not to overload the masked flag on
the interrupt context like this, and instead set a new flag on the vdev
under irqlock to indicate the device is unable to generate interrupts?
The irq handler would then add a test of this flag before any tests that
would access the device, along the lines of the sketch below.
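
Completely untested and only meant to illustrate the idea; the flag name
(pm_intx_blocked here) and its exact placement are just for the example:

static int vfio_pci_core_runtime_suspend(struct device *dev)
{
	struct vfio_pci_core_device *vdev = dev_get_drvdata(dev);
	unsigned long flags;

	/* Device is about to become inaccessible to the INTx handler. */
	spin_lock_irqsave(&vdev->irqlock, flags);
	vdev->pm_intx_blocked = true;
	spin_unlock_irqrestore(&vdev->irqlock, flags);

	return 0;
}

...with the runtime resume callback clearing the flag the same way, and
vfio_intx_handler() gaining a test before the config space access:

	spin_lock_irqsave(&vdev->irqlock, flags);
	if (!vdev->pci_2_3) {
		...
	} else if (!vdev->ctx[0].masked && !vdev->pm_intx_blocked &&
		   pci_check_and_mask_intx(vdev->pdev)) {
		vdev->ctx[0].masked = true;
		ret = IRQ_HANDLED;
	}
	spin_unlock_irqrestore(&vdev->irqlock, flags);

Thanks,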

Alex

> /*
> diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
> index 23c176d4b073..cdfd328ba6b1 100644
> --- a/include/linux/vfio_pci_core.h
> +++ b/include/linux/vfio_pci_core.h
> @@ -124,6 +124,7 @@ struct vfio_pci_core_device {
> bool needs_reset;
> bool nointx;
> bool needs_pm_restore;
> + bool pm_intx_masked;
> struct pci_saved_state *pci_saved_state;
> struct pci_saved_state *pm_save;
> int ioeventfds_nr;
> @@ -147,7 +148,7 @@ struct vfio_pci_core_device {
> #define is_irq_none(vdev) (!(is_intx(vdev) || is_msi(vdev) || is_msix(vdev)))
> #define irq_is(vdev, type) (vdev->irq_type == type)
>
> -extern void vfio_pci_intx_mask(struct vfio_pci_core_device *vdev);
> +extern bool vfio_pci_intx_mask(struct vfio_pci_core_device *vdev);
> extern void vfio_pci_intx_unmask(struct vfio_pci_core_device *vdev);
>
> extern int vfio_pci_set_irqs_ioctl(struct vfio_pci_core_device *vdev,