Re: [RFC 0/1] memfd: Support mapping to zero page on reading

From: David Hildenbrand
Date: Tue Jan 04 2022 - 09:45:02 EST


On 22.12.21 13:33, Peng Liang wrote:
> Hi all,
>
> Recently we have been working on implementing CRIU [1] for QEMU based
> on Steven's work [2]. It uses memfd to allocate guest memory so that
> the memory can be restored (inherited) in the new QEMU process.
> However, a memfd allocates a new page on read, while anonymous memory
> maps to the shared zeropage on read. For QEMU, memfd may therefore
> cause all memory to become allocated during migration, because QEMU
> reads all pages while migrating. That can lead to OOM if memory
> over-commit is enabled, which it usually is in public clouds.

Hi,

it's the exact same problem as migrating a VM right after inflating
the balloon, or after reporting free memory to the hypervisor via
virtio-balloon free page reporting.

Even populating the shared zeropage wastes CPU time and, more
importantly, memory for page tables. Further, you'd end up reading the
whole page only to discover that you just populated the shared
zeropage, which is far from optimal. Instead of doing that dance, just
check whether there is anything worth reading at all.

You could simply check whether a page is actually populated before
going ahead and reading it for migration. I actually discussed that
with Dave Gilbert recently.

For anonymous memory this is pretty straightforward via
/proc/self/pagemap. For files you can use lseek(SEEK_DATA/SEEK_HOLE).
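
Something like the following, completely untested, sketches both;
migrate_range() and page_is_populated() are made-up names and error
handling is mostly omitted:

        #define _GNU_SOURCE     /* SEEK_DATA/SEEK_HOLE */
        #include <stdint.h>
        #include <unistd.h>

        /* Placeholder: migrate the populated range [start, start + len). */
        extern void migrate_range(int fd, off_t start, off_t len);

        /*
         * Files (memfd): alternate SEEK_DATA/SEEK_HOLE so holes are
         * never read at all. lseek() fails with ENXIO once there is
         * no more data to find.
         */
        static void migrate_populated(int fd)
        {
                off_t data = 0, hole;

                while ((data = lseek(fd, data, SEEK_DATA)) >= 0) {
                        hole = lseek(fd, data, SEEK_HOLE);
                        if (hole < 0)
                                break;
                        migrate_range(fd, data, hole - data);
                        data = hole;
                }
        }

        /*
         * Anonymous memory: each 64-bit /proc/self/pagemap entry has
         * bit 63 set if the page is present and bit 62 set if it is
         * swapped; anything else was never populated.
         */
        static int page_is_populated(int pagemap_fd, void *addr)
        {
                size_t pfn = (uintptr_t)addr / getpagesize();
                uint64_t entry;

                if (pread(pagemap_fd, &entry, sizeof(entry),
                          pfn * sizeof(entry)) != sizeof(entry))
                        return -1;
                return !!(entry & (3ull << 62));
        }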

https://lkml.kernel.org/r/20210923064618.157046-2-tiberiu.georgescu@xxxxxxxxxxx

That mail contains some details. There was a discussion about
eventually adding a better bulk interface, should that turn out to be
necessary for performance.

--
Thanks,

David / dhildenb