Re: [PATCH 2/2] Drivers: hv: balloon: Support 2M page allocations for ballooning

From: Michal Hocko
Date: Mon Mar 18 2013 - 10:13:13 EST


On Mon 18-03-13 13:44:05, KY Srinivasan wrote:
>
>
> > -----Original Message-----
> > From: Michal Hocko [mailto:mhocko@xxxxxxx]
> > Sent: Monday, March 18, 2013 6:53 AM
> > To: KY Srinivasan
> > Cc: gregkh@xxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> > devel@xxxxxxxxxxxxxxxxxxxxxx; olaf@xxxxxxxxx; apw@xxxxxxxxxxxxx;
> > andi@xxxxxxxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> > kamezawa.hiroyuki@xxxxxxxxx; hannes@xxxxxxxxxxx; yinghan@xxxxxxxxxx
> > Subject: Re: [PATCH 2/2] Drivers: hv: balloon: Support 2M page allocations for
> > ballooning
> >
> > On Sat 16-03-13 14:42:05, K. Y. Srinivasan wrote:
> > > While ballooning memory out of the guest, attempt 2M allocations first.
> > > If 2M allocations fail, then go for 4K allocations. In cases where we
> > > have performed 2M allocations, split this 2M page so that we can free this
> > > page at 4K granularity (when the host returns the memory).
> >
> > Maybe I am missing something but what is the advantage of 2M allocation
> > when you split it up immediately so you are not using it as a huge page?
>
> The Hyper-V ballooning protocol specifies the pages being ballooned as
> page ranges - start_pfn: number_of_pfns. So, when the guest balloon
> is inflating and I am able to allocate 2M pages, I will be able to
> represent 512 contiguous pages in one 64-bit entry, and this makes the
> ballooning operation that much more efficient. The reason I split the
> page is that the host does not guarantee that when it returns the
> memory to the guest, it will return it at any particular granularity,
> so I have to be able to free this memory at 4K granularity. This is
> the corner case that I will have to handle.
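
For readers following the thread, a minimal sketch of the 2M-first
allocation with a 4K fallback described above. This is not the driver's
actual code; the function name, the GFP flags and the hard-coded order 9
(2M with 4K pages on x86-64) are assumptions made for illustration:

#include <linux/gfp.h>	/* alloc_pages(), GFP_HIGHUSER */
#include <linux/mm.h>	/* split_page() */

/*
 * Sketch only: opportunistically allocate a 2M chunk (order 9) and
 * split it into independent 4K pages so each page can be freed on its
 * own when the host returns memory in arbitrary chunks; fall back to a
 * single 4K page when the 2M allocation fails.
 */
static struct page *balloon_alloc_sketch(unsigned int *npages)
{
	struct page *pg;

	/* Opportunistic 2M attempt: do not retry or warn on failure. */
	pg = alloc_pages(GFP_HIGHUSER | __GFP_NORETRY | __GFP_NOWARN, 9);
	if (pg) {
		/*
		 * The page is a non-compound order-9 page, so split_page()
		 * turns it into 512 independently freeable 4K pages.
		 */
		split_page(pg, 9);
		*npages = 512;
		return pg;
	}

	/* 2M allocation failed; fall back to a single 4K page. */
	*npages = 1;
	return alloc_pages(GFP_HIGHUSER | __GFP_NORETRY | __GFP_NOWARN, 0);
}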
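
And a sketch of the page-range representation mentioned above, where the
512 contiguous pages from one 2M allocation collapse into a single 64-bit
start_pfn/page_cnt entry. The field names and widths here are
illustrative; the real layout is in the Hyper-V Dynamic Memory protocol
definitions in the driver:

#include <linux/types.h>	/* __u64 */

/*
 * Illustrative packing of one ballooned range into a single 64-bit
 * protocol entry: a start PFN plus a count of 4K pages.
 */
union mem_page_range_sketch {
	struct {
		__u64 start_pfn:40;	/* first 4K PFN in the range */
		__u64 page_cnt:24;	/* number of 4K pages in the range */
	};
	__u64 page_range;		/* raw 64-bit entry sent to the host */
};

/* One split 2M allocation is reported as a single entry of 512 pages. */
static __u64 encode_range_sketch(unsigned long start_pfn, unsigned int npages)
{
	union mem_page_range_sketch r;

	r.start_pfn = start_pfn;
	r.page_cnt = npages;

	return r.page_range;
}

With 4K-only allocations, 512 scattered pages could need up to 512 such
entries, which is the efficiency difference described above.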

Thanks for the clarification. I think this information would be valuable
in the changelog.
--
Michal Hocko
SUSE Labs