Re: [GIT PULL] AlacrityVM guest drivers for 2.6.33

From: Gregory Haskins
Date: Wed Dec 23 2009 - 12:34:54 EST

On 12/23/09 1:15 AM, Kyle Moffett wrote:
> On Tue, Dec 22, 2009 at 12:36, Gregory Haskins
> <gregory.haskins@xxxxxxxxx> wrote:
>> On 12/22/09 2:57 AM, Ingo Molnar wrote:
>>> * Gregory Haskins <gregory.haskins@xxxxxxxxx> wrote:
>>>> Actually, these patches have nothing to do with the KVM folks. [...]
>>> That claim is curious to me - the AlacrityVM host
>> It's quite simple, really. These drivers support accessing vbus, and
>> vbus is hypervisor agnostic. In fact, vbus isn't necessarily even
>> hypervisor related. It may be used anywhere a Linux kernel is the
>> "io backend", which includes hypervisors like AlacrityVM, but also
>> userspace apps and interconnected physical systems.
>> The vbus-core on the backend and the drivers on the frontend operate
>> completely independently of the underlying hypervisor. A glue piece
>> called a "connector" ties them together, and any "hypervisor" specific
>> details are encapsulated in the connector module. In this case, the
>> connector surfaces to the guest side as a pci-bridge, so even that is
>> not hypervisor specific per se. It will work with any pci-bridge that
>> exposes a compatible ABI, which conceivably could be actual hardware.
> This is actually something that is of particular interest to me. I
> have a few prototype boards right now with programmable PCI-E
> host/device links on them; one of my long-term plans is to finagle
> vbus into providing multiple "virtual" devices across that single
> PCI-E interface.
> Specifically, I want to be able to provide virtual NIC(s), serial
> ports and serial consoles, virtual block storage, and possibly other
> kinds of interfaces. My big problem with existing virtio right now
> (although I would be happy to be proven wrong) is that it seems to
> need some sort of out-of-band communication channel for setting up
> devices, not to mention it seems to need one PCI device per virtual
> device.
> So I would love to be able to port something like vbus to my nifty PCI
> hardware and write some backend drivers... then my PCI-E connected
> systems would dynamically provide a list of highly-efficient "virtual"
> devices to each other, with only one 4-lane PCI-E bus.

Hi Kyle,

We indeed have others doing something similar. I have CC'd Ira, who may
be able to provide you with more details. I would also point you at the
canonical example of what you would need to write to tie your systems
together. It's the "null connector" (kernel/vbus/connectors/null.c in
the AlacrityVM tree), which you can find
here:;a=blob;f=kernel/vbus/connectors/null.c;h=b6d16cb68b7e49e07528278bc9f5b73e1dac0c2f;hb=HEAD

Do not hesitate to ask any questions, though you may want to take the
conversation to the alacrityvm-devel list so as not to annoy the current
CC list any further than I already have ;)

Kind Regards,

Attachment: signature.asc
Description: OpenPGP digital signature