Re: [PATCH RESEND 5/5] vhost-blk: Add vhost-blk support

From: Asias He
Date: Fri Jul 20 2012 - 21:07:03 EST

On 07/21/2012 04:56 AM, Anthony Liguori wrote:
> "Michael S. Tsirkin" <mst@xxxxxxxxxx> writes:

>> On Thu, Jul 19, 2012 at 08:05:42AM -0500, Anthony Liguori wrote:
>>> Of course, the million dollar question is why would using AIO in the
>>> kernel be faster than using AIO in userspace?
>>
>> Actually for me a more important question is how does it compare
>> with virtio-blk dataplane?

> I'm not even asking for a benchmark comparison. It's the same API
> being called from a kernel thread vs. a userspace thread. Why would
> there be a 60% performance difference between the two? That doesn't
> make any sense.

Please read the commit log again. I am not saying vhost-blk gives a 60% improvement over the userspace implementation. I am saying this vhost-blk gives a 60% improvement over the original vhost-blk implementation:

This patch is based on Liu Yuan's implementation with various
improvements and bug fixes. Notably, this patch processes guest
notification and host completion in parallel, which gives about a 60%
performance improvement over Liu Yuan's implementation.

> There's got to be a better justification for putting this in the kernel
> than just that we can.
>
> I completely understand why Christoph's suggestion of submitting BIOs
> directly would be faster. There's no way to do that in userspace.

Well, with Zach and Dave's new in-kernel AIO API, the AIO usage in the kernel is much simpler than in userspace. That is one potential reason the in-kernel implementation could beat the userspace one, and I am working on it right now. And for block-based images, as suggested by Christoph, we can submit bios directly, which is another potential reason.

Why not go further and see whether we can improve the I/O stack all the way from the guest kernel down to the host kernel? We cannot do that if we stick to doing everything in userspace (qemu).

