Re: [PATCH 0/2] eventfd: new EFD_STATE flag

From: Davide Libenzi
Date: Wed Aug 26 2009 - 13:45:22 EST


On Wed, 26 Aug 2009, Michael S. Tsirkin wrote:

> On Tue, Aug 25, 2009 at 02:57:01PM -0700, Davide Libenzi wrote:
> > On Tue, 25 Aug 2009, Michael S. Tsirkin wrote:
> >
> > > Yes, we don't want that. The best thing is to try to restate the problem
> > > in a way that is generic, and then either solve it or, better, use an
> > > existing solution. Right?
> > >
> > > I thought I had that, but apparently not. The reason I'm Cc-ing you is
> > > not to try and spam you until you give up and accept the patch; it's in
> > > the hope that you see the pattern behind our usage and help generalize
> > > it.
> > >
> > > If I understand it correctly, you believe this is not possible and so
> > > any solution will have to be in KVM? Or maybe I didn't state the problem
> > > clearly enough and should restate it?
> >
> > Please do.
> >
> >
> >
> > - Davide
>
>
> Problem looks like this:
>
> There are multiple processes (devices), each of which has a condition
> (interrupt line) that its own logic determines to be either true or
> false.
>
> A single other process (the hypervisor) is interested in a condition
> (interrupt level) which is the logical OR of all interrupt lines.
> On changes, the interrupt level value needs to be read and copied to the
> guest virtual CPU.
>
> We also want the ability to replace some or all of the processes above
> with kernel components, with condition changes made potentially from
> hardware interrupt context.
>
>
> How we wanted to solve it with EFD_STATE: share a separate eventfd
> between each device and the hypervisor. The device sets its state to
> either 0 or 1. The hypervisor polls all the eventfds, reads the interrupt
> line values on changes, calculates the interrupt level, and updates the
> guest.
>
> Alternative solution: shared memory where each device writes its
> interrupt line value. This makes the setup more complex (much more than
> just an fd needs to be shared around), and makes access from interrupt
> context impossible unless we lock the memory (and locking userspace
> memory introduces yet another set of issues).
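
For concreteness, here is a rough userspace sketch of the device side of the
EFD_STATE flow described above. EFD_STATE is the flag proposed in this patch
series, not an existing kernel API; the flag value and the "a write overwrites
the stored state" semantics assumed below are illustrative guesses, not the
definitive interface.

/*
 * Device side: one eventfd per interrupt line, shared with the
 * hypervisor.  With the proposed EFD_STATE semantics, a write() is
 * assumed to replace the stored state rather than add to a counter.
 */
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

#ifndef EFD_STATE
#define EFD_STATE (1 << 2)	/* placeholder value, assumption only */
#endif

static int irq_line_fd = -1;

static int irq_line_init(void)
{
	irq_line_fd = eventfd(0, EFD_STATE | EFD_NONBLOCK);
	return irq_line_fd < 0 ? -1 : 0;
}

/* Device logic calls this whenever its interrupt line changes. */
static void irq_line_set(int level)
{
	uint64_t val = level ? 1 : 0;

	(void)write(irq_line_fd, &val, sizeof(val));
}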

OK, if I get it correctly, there is one eventfd signaler (the device) and
one eventfd reader (the hypervisor) per eventfd, right?
The hypervisor listens to multiple devices, detects state changes, and
associates each eventfd "line" with an IRQ number by some configuration
(a la PCI), right?
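
If so, the hypervisor side would look roughly like the sketch below. The
per-line bookkeeping, the update_guest_irq_level() hook and the assumed
EFD_STATE read semantics (a read returns the current 0/1 state without
consuming it) are only illustrative; the per-line IRQ mapping taken from
configuration is left out for brevity.

/*
 * Hypervisor side: poll one eventfd per device line, track the last
 * state read from each line, and push the logical OR of all lines to
 * the guest vcpu when it changes.
 */
#include <poll.h>
#include <stdint.h>
#include <unistd.h>

#define MAX_LINES 32

static struct pollfd line_fds[MAX_LINES];	/* one eventfd per device */
static int line_level[MAX_LINES];		/* last state read per line */
static int nlines;
static int current_level;

/* Hypothetical hook: would raise/lower the guest's interrupt level. */
static void update_guest_irq_level(int level)
{
	(void)level;
}

static void hypervisor_poll_lines(void)
{
	int i, level = 0;

	for (i = 0; i < nlines; i++)
		line_fds[i].events = POLLIN;

	if (poll(line_fds, nlines, -1) <= 0)
		return;

	for (i = 0; i < nlines; i++) {
		uint64_t state;

		if (!(line_fds[i].revents & POLLIN))
			continue;

		/* Assumed EFD_STATE behaviour: read returns current state. */
		if (read(line_fds[i].fd, &state, sizeof(state)) == sizeof(state))
			line_level[i] = state ? 1 : 0;
	}

	for (i = 0; i < nlines; i++)
		level |= line_level[i];

	if (level != current_level) {
		current_level = level;
		update_guest_irq_level(level);
	}
}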



- Davide

