Re: [PATCH][v2.6.29][XEN] Return unused memory to hypervisor

From: Miroslav Rezanina
Date: Mon Sep 07 2009 - 08:42:04 EST

----- "Jeremy Fitzhardinge" <jeremy@xxxxxxxx> wrote:

> From: "Jeremy Fitzhardinge" <jeremy@xxxxxxxx>
> To: "Miroslav Rezanina" <mrezanin@xxxxxxxxxx>
> Cc: linux-kernel@xxxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, "Gianluca Guida" <gianluca.guida@xxxxxxxxxx>
> Sent: Thursday, August 20, 2009 6:39:02 PM GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
> Subject: Re: [PATCH][v2.6.29][XEN] Return unused memory to hypervisor
> On 08/20/09 00:47, Miroslav Rezanina wrote:
> > The e820 map is handled in the guest. However, this patch informs
> > the hypervisor that the guest uses less memory than was assigned
> > to it. If the hypervisor is not informed, memory stays reserved for
> > a guest that does not need it. If it is informed, it decreases the
> > guest's memory reservation and the unused memory is marked as free
> > for use by other guests.
> >
> Yes. But the guest will modify its own e820 map for a number of
> reasons; for example: reducing its own memory, or clearing a space
> for the PCI hole. In general we want to free any underlying pages
> which don't correspond to E820_RAM regions.
> J
> --

Hi Jeremy,
can you give me a practical example where an e820 map can have a "hole"
inside it, i.e. a block of memory that is not listed in the map but has
listed memory both before and after it?
As far as I can see from the source code, memory is always removed from
some point to the end of the map, never from one address to another, and
I cannot imagine how such a case would be handled. Of course, there can be
special regions, such as the PCI hole, but those are marked as "Reserved".
Some interior memory can be reserved and returned, but that is already
handled by the balloon driver. My patch returns memory that this driver
cannot use.
Miroslav Rezanina
Software Engineer - Virtualization Team - XEN kernel

To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx