Re: [PATCH v5 1/4] kvm: Extend irqfd to support level interrupts

From: Michael S. Tsirkin
Date: Wed Jul 18 2012 - 11:57:47 EST


On Wed, Jul 18, 2012 at 09:48:01AM -0600, Alex Williamson wrote:
> On Wed, 2012-07-18 at 18:38 +0300, Michael S. Tsirkin wrote:
> > On Wed, Jul 18, 2012 at 08:47:23AM -0600, Alex Williamson wrote:
> > > On Wed, 2012-07-18 at 15:07 +0300, Michael S. Tsirkin wrote:
> > > > On Wed, Jul 18, 2012 at 02:48:44PM +0300, Gleb Natapov wrote:
> > > > > On Wed, Jul 18, 2012 at 02:39:10PM +0300, Michael S. Tsirkin wrote:
> > > > > > On Wed, Jul 18, 2012 at 02:22:19PM +0300, Michael S. Tsirkin wrote:
> > > > > > > > > > > > > So as was discussed kvm_set_irq under spinlock is bad for scalability
> > > > > > > > > > > > > with multiple VCPUs. Why do we need a spinlock simply to protect
> > > > > > > > > > > > > level_asserted? Let's use an atomic test and set/test and clear and the
> > > > > > > > > > > > > problem goes away.
> > > > > > > > > > > > >
> > > > > > > > > > > > The sad reality is that for level interrupts we already scan all vcpus
> > > > > > > > > > > > under a spinlock.
> > > > > > > > > > >
> > > > > > > > > > > Where?
> > > > > > > > > > >
> > > > > > > > > > ioapic
> > > > > > > > >
> > > > > > > > > $ grep kvm_for_each_vcpu virt/kvm/ioapic.c
> > > > > > > > > $
> > > > > > > > >
> > > > > > > > > ?
> > > > > > > > >
> > > > > > > >
> > > > > > > > Come on Michael. You can do better than grep and actually look at what
> > > > > > > > the code does. The code that loops over all vcpus while delivering an irq is
> > > > > > > > in kvm_irq_delivery_to_apic(). Now grep for that.
> > > > > > >
> > > > > > > Hmm, I see, it's actually done for edge if injected from ioapic too,
> > > > > > > right?
> > > > > > >
> > > > > > > So set_irq does a linear scan, and for each matching CPU it calls
> > > > > > > kvm_irq_delivery_to_apic which is another scan?
> > > > > > > So it's actually N^2 worst case for a broadcast?
> > > > > >
> > > > > > No it isn't, I misread the code.
> > > > > >
> > > > > >
> > > > > > Anyway, maybe not trivially but this looks fixable to me: we could drop
> > > > > > the ioapic lock before calling kvm_irq_delivery_to_apic.
> > > > > >
> > > > > Maybe, maybe not. Just saying "let's drop the lock whenever we don't
> > > > > feel like holding one" does not cut it.
> > > >
> > > > One thing we do is set remote_irr if the interrupt was injected.
> > > > I agree these things are tricky.
> > > >
> > > > One other question:
> > > >
> > > > static int ioapic_service(struct kvm_ioapic *ioapic, unsigned int idx)
> > > > {
> > > > 	union kvm_ioapic_redirect_entry *pent;
> > > > 	int injected = -1;
> > > >
> > > > 	pent = &ioapic->redirtbl[idx];
> > > >
> > > > 	if (!pent->fields.mask) {
> > > > 		injected = ioapic_deliver(ioapic, idx);
> > > > 		if (injected && pent->fields.trig_mode == IOAPIC_LEVEL_TRIG)
> > > > 			pent->fields.remote_irr = 1;
> > > > 	}
> > > >
> > > > 	return injected;
> > > > }
> > > >
> > > >
> > > > This if (injected) looks a bit strange, since ioapic_deliver returns
> > > > -1 if there are no matching destinations. Should it be if (injected > 0)?
> > > >
> > > >
> > > >
> > > > > Back to the original point, though: the current situation is that
> > > > > calling kvm_set_irq() under a spinlock is no worse for scalability
> > > > > than calling it outside one.
> > > >
> > > > Yes. Still, this specific use can just use an atomic flag;
> > > > lock+bool is not needed, and we won't need to undo it later.
> > >
> > >
> > > Actually, no, replacing it with an atomic is racy.
> > >
> > > CPU0 (inject)                      CPU1 (EOI)
> > > atomic_cmpxchg(&asserted, 0, 1)
> > >                                    atomic_cmpxchg(&asserted, 1, 0)
> > >                                    kvm_set_irq(0)
> > > kvm_set_irq(1)
> > >                                    eventfd_signal
> > >
> > > The interrupt is now stuck on until another interrupt is injected.
> > >
> >
> > Well, the EOI somehow happened here before the interrupt, so it's a bug
> > somewhere else?
>
> Interrupts can be shared. We also can't guarantee that the guest won't
> write a bogus EOI to the ioapic. The irq ack notifier doesn't filter on
> irq source id... I'm not sure it can.

I guess if Avi OKs adding another kvm_set_irq call under a spinlock,
that's the best we can do for now.

If not, maybe we can teach kvm_set_irq to return an indication
of the previous line status. Specifically, kvm_irq_line_state
could do a test_and_set/test_and_clear and, if the bit is already
set/cleared, return 0 immediately.

--
MST