Anthony Liguori wrote: Well, please propose the virtio API first and then I'll adjust the PCI ABI. I don't want to build things into the ABI that we never actually end up using in virtio :-)
Move ->kick() to virtio_driver.
I believe Xen networking uses the same event channel for both rx and tx, so in effect they're using this model. It's been a long time since I looked, though.
I was thinking more along the lines that a hypercall-based device would certainly be implemented in-kernel whereas the current device is naturally implemented in userspace. We can simply use a different device for in-kernel drivers than for userspace drivers.
Where a device is implemented is an implementation detail that should be hidden from the guest; isn't that one of the strengths of virtualization? Two examples: a file-based block device implemented in qemu gives you fancy file formats with encryption and compression, while the same device implemented in the kernel gives you a low-overhead path directly to a zillion-disk SAN volume. Or a user-level network device capable of running with the slirp stack and no permissions, vs. the kernel device running copy-free most of the time and using a DMA engine for the rest, but requiring you to be good friends with the admin.
The user should expect zero reconfigurations moving a VM from one model to the other.
None of the PCI devices currently work like that in QEMU. It would be very hard to make a device that worked this way, because the order in which values are written matters a whole lot. For instance, if you wrote the status register before the queue information, the driver could get into a funky state.
I assume you're talking about restore? Isn't that atomic?
Not much of an argument, I know.
Wrt. the number of queues: 8 queues will consume 32 bytes of PCI space if all you store is the ring PFN.
You also need at least a num argument, which takes you to 48 or 64 bytes depending on whether you care about strange formatting. 8 queues may not be enough either. Eric and I have discussed whether the 9p virtio device should support multiple mounts per virtio device and, if so, whether each one should have its own queue. Any device that supports this sort of multiplexing will very quickly start using a lot of queues.
Make it appear as a PCI function? (Though my feeling is that multiple mounts should be different devices; we can then hotplug mountpoints.)
I think most types of hardware have some notion of a selector or mode. Take a look at the LSI adapter or even VGA.
True. They aren't fun to use, though.