Re: [virtio-dev] Re: [PATCH v7 kernel 3/5] virtio-balloon: implementation of VIRTIO_BALLOON_F_CHUNK_TRANSFER

From: David Hildenbrand
Date: Fri Mar 10 2017 - 08:26:38 EST


Am 10.03.2017 um 11:02 schrieb Wei Wang:
> On 03/08/2017 12:01 PM, Michael S. Tsirkin wrote:
>> On Fri, Mar 03, 2017 at 01:40:28PM +0800, Wei Wang wrote:
>>> From: Liang Li <liang.z.li@xxxxxxxxx>
>>>
>>> The current virtio-balloon implementation is not very efficient,
>>> because pages are transferred to the host one by one.
>>> Here is the breakdown of the time in percentage spent on each
>>> step of the balloon inflating process (inflating 7GB of an 8GB
>>> idle guest).
>>>
>>> 1) allocating pages (6.5%)
>>> 2) sending PFNs to host (68.3%)
>>> 3) address translation (6.1%)
>>> 4) madvise (19%)
>>>
>>> It takes about 4126ms for the inflating process to complete.
>>> The above profiling shows that the bottlenecks are steps 2)
>>> and 4).
>>>
>>> This patch optimizes step 2) by transferring pages to the host in
>>> chunks. A chunk consists of guest physically contiguous pages, and
>>> it is offered to the host via a base PFN (i.e. the start PFN of
>>> those physically contiguous pages) and a size (i.e. the total
>>> number of pages). A normal chunk is formatted as below:
>>> -----------------------------------------------
>>> |        Base (52 bit)        | Size (12 bit) |
>>> -----------------------------------------------
>>> For large size chunks, an extended chunk format is used:
>>> -----------------------------------------------
>>> | Base (64 bit) |
>>> -----------------------------------------------
>>> -----------------------------------------------
>>> | Size (64 bit) |
>>> -----------------------------------------------
>>>
>>> By doing so, step 4) can also be optimized by doing address
>>> translation and madvise() in chunks rather than page by page.
>>>
>>> This optimization requires the negotiation of a new feature bit,
>>> VIRTIO_BALLOON_F_CHUNK_TRANSFER.
>>>
>>> With this new feature, the above ballooning process takes ~590ms,
>>> an improvement of ~85%.
>>>
>>> TODO: optimize step 1) by allocating/freeing a chunk of pages
>>> instead of a single page each time.
>>>
>>> Signed-off-by: Liang Li <liang.z.li@xxxxxxxxx>
>>> Signed-off-by: Wei Wang <wei.w.wang@xxxxxxxxx>
>>> Suggested-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
>>> Cc: Michael S. Tsirkin <mst@xxxxxxxxxx>
>>> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
>>> Cc: Cornelia Huck <cornelia.huck@xxxxxxxxxx>
>>> Cc: Amit Shah <amit.shah@xxxxxxxxxx>
>>> Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
>>> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
>>> Cc: David Hildenbrand <david@xxxxxxxxxx>
>>> Cc: Liang Li <liliang324@xxxxxxxxx>
>>> Cc: Wei Wang <wei.w.wang@xxxxxxxxx>
>> Does this pass sparse? I see some endian-ness issues here.
>
> "pass sparse"- what does that mean?
> I didn't see any complaints from "make" on my machine.

https://kernel.org/doc/html/latest/dev-tools/sparse.html

Static code analysis. You have to run it explicitly.
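
[Editor's note: in the kernel tree, sparse is invoked through make's C= option, roughly as below. The endianness flag was only needed on pre-4.10 trees, where __CHECK_ENDIAN__ was not yet enabled by default.]

```shell
# Run sparse on the files that are about to be recompiled:
make C=1 drivers/virtio/virtio_balloon.o

# Re-check all source files for the target, rebuilt or not:
make C=2 drivers/virtio/virtio_balloon.o

# On older trees, enable endianness (__le32/__bitwise) warnings:
make C=2 CF="-D__CHECK_ENDIAN__" drivers/virtio/virtio_balloon.o
```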

--
Thanks,

David