Re: [PATCH 7/9] virtio-pci: harden INTX interrupts

From: Boqun Feng
Date: Tue Sep 14 2021 - 05:34:42 EST


On Mon, Sep 13, 2021 at 11:36:24PM +0200, Thomas Gleixner wrote:
[...]
> As device startup does not happen often, it's sensible to do the
> following:
>
>     disable_irq();
>     vp_dev->intx_soft_enabled = true;
>     enable_irq();
>
> because:
>
>     disable_irq()
>       synchronize_irq()
>
> acts as a barrier for the preceding stores:
>
>     disable_irq()
>       raw_spin_lock(desc->lock);
>       __disable_irq(desc);
>       raw_spin_unlock(desc->lock);
>
>     synchronize_irq()
>       do {
>         raw_spin_lock(desc->lock);
>         in_progress = check_inprogress(desc);
>         raw_spin_unlock(desc->lock);
>       } while (in_progress);
>
>     intx_soft_enabled = true;
>
>     enable_irq();
>
> In this case synchronize_irq() prevents the subsequent store to
> intx_soft_enabled from leaking into the __disable_irq(desc) section,
> which in turn makes it impossible for an interrupt handler to observe
> intx_soft_enabled == true before the prerequisites which precede the
> call to disable_irq() are visible.
>

Right. In our memory model, raw_spin_unlock(desc->lock) +
raw_spin_lock(desc->lock) provides the so-called RCtso ordering, that
is, for the following code:

	A
	...
	raw_spin_unlock(desc->lock);
	...
	raw_spin_lock(desc->lock);
	...
	B

Memory accesses A and B will not be reordered unless A is a store and B
is a load. Such an ordering guarantee fulfils the requirement here.
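
To make that concrete, below is an MP-style litmus test I sketched for
this pattern (it is not taken from the patch or the tree): x stands for
the state published before disable_irq(), y for intx_soft_enabled, and
the smp_rmb() on P1 is only a stand-in for whatever read-side ordering
the handler relies on. Since the store to x is po-before the unlock and
the store to y is po-after the later lock of the same lock, the
unlock+lock pair orders the two stores, and herd7 should report the
"exists" clause as never satisfied (assuming I set this up correctly):

	C MP+unlock-lock+rmb

	{}

	P0(int *x, int *y, spinlock_t *s)
	{
		WRITE_ONCE(*x, 1);	/* stores done before disable_irq() */
		spin_lock(s);		/* __disable_irq() under desc->lock */
		spin_unlock(s);
		spin_lock(s);		/* synchronize_irq() under desc->lock */
		spin_unlock(s);
		WRITE_ONCE(*y, 1);	/* intx_soft_enabled = true */
	}

	P1(int *x, int *y)
	{
		int r0;
		int r1;

		r0 = READ_ONCE(*y);	/* handler sees "enabled" ... */
		smp_rmb();
		r1 = READ_ONCE(*x);	/* ... then must see the setup stores */
	}

	exists (1:r0=1 /\ 1:r1=0)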

For more information, see the LOCKING section of
tools/memory-model/Documentation/explanation.txt
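
And just for reference, the publish/consume pattern being discussed
would look roughly like the sketch below. This is only an illustration
of the ordering argument, not the actual patch: intx_soft_enabled,
vp_interrupt() and vp_vring_interrupt() are the names used in this
thread, vp_publish_intx_enabled() is a made-up helper name, and the
real handler of course also has to read the ISR status:

	/* Publish side, run once at the end of device setup (sketch). */
	static void vp_publish_intx_enabled(struct virtio_pci_device *vp_dev)
	{
		/*
		 * Everything the handler may touch has been set up and
		 * stored before this point.
		 */
		disable_irq(vp_dev->pci_dev->irq);
		/*
		 * The lock/unlock of desc->lock inside disable_irq() /
		 * synchronize_irq() keeps this store from leaking back
		 * before __disable_irq(), so a handler that observes it
		 * also observes the setup stores above.
		 */
		vp_dev->intx_soft_enabled = true;
		enable_irq(vp_dev->pci_dev->irq);
	}

	/* Handler side (sketch). */
	static irqreturn_t vp_interrupt(int irq, void *opaque)
	{
		struct virtio_pci_device *vp_dev = opaque;

		if (!READ_ONCE(vp_dev->intx_soft_enabled))
			return IRQ_NONE;

		return vp_vring_interrupt(irq, opaque);
	}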

Regards,
Boqun

> Of course the memory ordering wizards might disagree, but if they do,
> then we have a massive chase of ordering problems vs. similar constructs
> all over the tree ahead of us.
>
> From the interrupt perspective the sequence:
>
>     disable_irq();
>     vp_dev->intx_soft_enabled = true;
>     enable_irq();
>
> is perfectly fine as well. Any interrupt arriving during the disabled
> section will be reraised on enable_irq() in hardware because it's a
> level interrupt. Any resulting failure is either a hardware or a
> hypervisor bug.
>
> Thanks,
>
> tglx