Re: [RFC PATCH 0/8] mm/madvise: support process_madvise(MADV_DONTNEED)

From: David Hildenbrand
Date: Mon Sep 27 2021 - 13:06:04 EST


On 27.09.21 14:00, Nadav Amit wrote:


On Sep 27, 2021, at 3:58 AM, David Hildenbrand <david@xxxxxxxxxx> wrote:

On 27.09.21 12:41, Nadav Amit wrote:
On Sep 27, 2021, at 2:24 AM, David Hildenbrand <david@xxxxxxxxxx> wrote:

On 26.09.21 18:12, Nadav Amit wrote:
From: Nadav Amit <namit@xxxxxxxxxx>
The goal of these patches is to add support for
process_madvise(MADV_DONTNEED). Along the way, some (arguably) useful
cleanups, a bug fix and performance enhancements are made as well.
The patches try to consolidate the logic across the different behaviors,
and to a certain extent overlap/conflict with an outstanding patch that
does something similar [1]. This consolidation is, however, mostly
orthogonal to the aforementioned patch; it is done to clarify what each
behavior does with respect to locks and the TLB, and to batch these
operations more efficiently in process_madvise().
process_madvise(MADV_DONTNEED) is useful for two reasons: (a) it allows
userfaultfd monitors to unmap memory from monitored processes; and (b)
it is more efficient than madvise() since it is vectored and batches TLB
flushes more aggressively.
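
A minimal sketch of what such a vectored call might look like from
userspace, assuming this series is applied and that the installed kernel
headers define SYS_pidfd_open and SYS_process_madvise (libc wrappers may
not be available everywhere):

#define _GNU_SOURCE
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/mman.h>
#include <unistd.h>

static long zap_remote_ranges(pid_t target, const struct iovec *ranges,
			      size_t nr)
{
	long ret;
	int pidfd = syscall(SYS_pidfd_open, target, 0);

	if (pidfd < 0)
		return -1;
	/*
	 * A single call covers the whole vector, which is what allows
	 * process_madvise() to batch TLB flushes across the ranges.
	 */
	ret = syscall(SYS_process_madvise, pidfd, ranges, nr,
		      MADV_DONTNEED, 0);
	close(pidfd);
	return ret;
}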

MADV_DONTNEED on MAP_PRIVATE memory is a target-visible operation; this is very different from all the other process_madvise() calls we allow, which are merely hints and cannot break the target. I don't think this is acceptable.
This is a fair point, which I expected but did not address properly.
I guess an additional capability, such as CAP_SYS_PTRACE, would need to
be required in this case. Would that ease your mind?

I think it would be slightly better, but I'm still missing a clear use case that justifies messing with the page tables of other processes in that way, especially with MAP_PRIVATE mappings. Can you maybe elaborate a bit on a) and b)?

Especially, why would a) make sense or be required? When would it be a good idea to zap random pages of a target process, especially with MAP_PRIVATE? How would the use case make sure that the target process doesn't suddenly lose data? I would have assumed that you can really only do something sane with uffd if a) the process decided to give up on some pages (madvise(DONTNEED)) and b) the process hasn't touched these pages yet.

Can you also comment a bit more on b)? Who cares about that? And would we suddenly expect users of madvise() to switch to process_madvise() because it's more efficient? It sounds a bit weird to me TBH, but most probably I am missing details :)

Ok, ok, your criticism is fair. I tried to hold back some details in order to
keep the discussion from digressing. I am going to focus on (a), which is
what I really have in mind.

Thanks for the details!


The use case that I am exploring is a userspace memory manager with some
level of cooperation from the monitored processes.

The manager is notified about memory regions that it should monitor
(through PTRACE/LD_PRELOAD/explicit-API). It then monitors these regions
using the remote-userfaultfd that you saw on the second thread. When it
wants to reclaim (anonymous) memory, it:

1. Uses UFFD-WP to protect that memory (and for this purpose I have a
vectored UFFD-WP to do so efficiently, a patch which I did not send yet).
2. Calls process_vm_readv() to read that memory from the process.
3. Writes it back to “swap”.
4. Calls process_madvise(MADV_DONTNEED) to zap it (steps 2-4 are
sketched below).
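
A rough sketch of steps 2-4, assuming this series is applied and that the
pidfd was obtained as in the earlier sketch; swap_store_write() is a
hypothetical stand-in for whatever "swap" backend the manager uses:

#define _GNU_SOURCE
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/mman.h>
#include <unistd.h>
#include <stdlib.h>

/* Hypothetical backing-store hook standing in for step 3's "swap". */
int swap_store_write(pid_t pid, void *remote_addr, const void *buf, size_t len);

static int reclaim_range(int pidfd, pid_t pid, void *remote_addr, size_t len)
{
	struct iovec local  = { .iov_base = malloc(len), .iov_len = len };
	struct iovec remote = { .iov_base = remote_addr, .iov_len = len };
	int err = -1;

	if (!local.iov_base)
		return -1;

	/* Step 2: copy the (already write-protected) pages out of the target. */
	if (process_vm_readv(pid, &local, 1, &remote, 1, 0) != (ssize_t)len)
		goto out;

	/* Step 3: push the copy to the manager's backing store. */
	if (swap_store_write(pid, remote_addr, local.iov_base, len))
		goto out;

	/* Step 4: zap the range; a later access faults into the manager's uffd. */
	if (syscall(SYS_process_madvise, pidfd, &remote, 1, MADV_DONTNEED, 0) < 0)
		goto out;

	err = 0;
out:
	free(local.iov_base);
	return err;
}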

Once the memory is accessed again, the manager uses UFFD-COPY to bring it
back. This is really a work in progress, but eventually performance is not
as bad as you would imagine (some patches for efficient use of uffd with
io_uring are needed for that matter).
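
On the fault side, assuming the manager already holds a userfaultfd that
covers the reclaimed region (via the remote-userfaultfd work mentioned
above), bringing the page back could be the standard UFFDIO_COPY service
loop; swap_store_read() is again a hypothetical backing-store hook:

#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <stdint.h>

/* Hypothetical counterpart of swap_store_write() above. */
void swap_store_read(uint64_t remote_addr, void *buf, size_t len);

static void service_faults(int uffd, void *staging, size_t page_size)
{
	struct uffd_msg msg;

	while (read(uffd, &msg, sizeof(msg)) == sizeof(msg)) {
		if (msg.event != UFFD_EVENT_PAGEFAULT)
			continue;

		uint64_t addr = msg.arg.pagefault.address &
				~((uint64_t)page_size - 1);

		/* Fetch the copy that was written out during reclaim. */
		swap_store_read(addr, staging, page_size);

		struct uffdio_copy copy = {
			.dst  = addr,
			.src  = (uint64_t)(uintptr_t)staging,
			.len  = page_size,
			.mode = 0,
		};
		/* Install the page and wake the faulting thread. */
		ioctl(uffd, UFFDIO_COPY, &copy);
	}
}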

Again, thanks for the details. I guess this should basically work, although it involves a lot of complexity (read: all flavors of uffd on other processes). And I am not so sure about the performance aspects. "Performance is not as bad as you think" doesn't sound like the words you want to hear from a car dealer ;) So there has to be another big benefit to doing such user-space swapping.


I am aware that there are some caveats, as zapping the memory does not
guarantee that the memory will actually be freed, since it might be pinned
for a variety of reasons. That is why I mentioned that the processes have
"some level of cooperation" with the manager. The scheme is not intended to
deal with adversaries or uncommon corner cases (e.g., processes that use
UFFD for their own reasons).

It's not only long-term pinnings. Pages could have been de-duplicated (COW after fork, KSM, shared zeropage). Further, you'll most probably lose any kind of "aging" ("accessed") information on pages, or how would you track that?

Although I can see that this might work, I do wonder if it's a use case worth supporting. As Michal correctly raised, we already have other infrastructure in place to trigger swapin/swapout. I also recall that DAMON wants to let you write advanced policies for that by monitoring actual access characteristics.


Putting aside my use case (which I am sure people would be glad to
criticize), I can imagine that debuggers or emulators may also find use for
similar schemes (although I do not have concrete use cases for them).

I'd be curious about the use cases for debuggers/emulators. Especially for emulators, I'd guess it makes more sense to just do it within the process. And for debuggers, I'm having a hard time seeing why it would make sense to throw away a page instead of just overwriting it with $PATTERN (e.g., 0). But I'm sure people can be creative :)

--
Thanks,

David / dhildenb