Re: [RFC PATCH 0/8] mm/madvise: support process_madvise(MADV_DONTNEED)

From: Nadav Amit
Date: Mon Sep 27 2021 - 15:59:48 EST

> On Sep 27, 2021, at 10:05 AM, David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> On 27.09.21 14:00, Nadav Amit wrote:
>>> On Sep 27, 2021, at 3:58 AM, David Hildenbrand <david@xxxxxxxxxx> wrote:
>>>
>>> On 27.09.21 12:41, Nadav Amit wrote:
>>>>> On Sep 27, 2021, at 2:24 AM, David Hildenbrand <david@xxxxxxxxxx> wrote:
>>>>>
>>>>> On 26.09.21 18:12, Nadav Amit wrote:
>>>>>> From: Nadav Amit <namit@xxxxxxxxxx>
>>>>>>
>>>>>> The goal of these patches is to add support for
>>>>>> process_madvise(MADV_DONTNEED). Yet, in the process some (arguably)
>>>>>> useful cleanups, a bug fix, and performance enhancements are performed.
>>>>>>
>>>>>> The patches try to consolidate the logic across different behaviors,
>>>>>> and to a certain extent overlap/conflict with an outstanding patch that
>>>>>> does something similar [1]. This consolidation, however, is mostly
>>>>>> orthogonal to the aforementioned one, and is done in order to clarify
>>>>>> what is done with respect to locks and TLB for each behavior, and to
>>>>>> batch these operations more efficiently on process_madvise().
>>>>>>
>>>>>> process_madvise(MADV_DONTNEED) is useful for two reasons: (a) it allows
>>>>>> userfaultfd monitors to unmap memory from monitored processes; and (b)
>>>>>> it is more efficient than madvise() since it is vectored and batches TLB
>>>>>> flushes more aggressively.
>>>>>
>>>>> MADV_DONTNEED on MAP_PRIVATE memory is a target-visible operation; this is very different from all the other process_madvise() calls we allow, which are merely hints that cannot break the target. I don't think this is acceptable.
>>>> This is a fair point, which I expected, but did not address properly.
>>>> I guess an additional capability, such as CAP_SYS_PTRACE needs to be
>>>> required in this case. Would that ease your mind?
>>>
>>> I think it would be slightly better, but I'm still missing a clear use case that justifies messing with the page tables of other processes in that way, especially with MAP_PRIVATE mappings. Can you maybe elaborate a bit on a) and b)?
>>>
>>> Especially, why would a) make sense or be required? When would it be a good idea to zap random pages of a target process, especially with MAP_PRIVATE? How would the target use case make sure that the target process doesn't suddenly lose data? I would have assumed that you can really only do something sane with uffd() if 1) the process decided to give up on some pages (madvise(DONTNEED)) or 2) the process hasn't touched these pages yet.
>>>
>>> Can you also comment a bit more on b)? Who cares about that? And would we suddenly expect users of madvise() to switch to process_madvise() because it's more effective? It sounds a bit weird to me TBH, but most probably I am missing details :)
>> Ok, ok, your criticism is fair. I tried to hold back some details in order to
>> prevent the discussion from digressing. I am going to focus on (a) which is
>> what I really have in mind.
>
> Thanks for the details!
>
>> The use-case that I explore is a userspace memory manager with some
>> level of cooperation from the monitored processes.
>>
>> The manager is notified of memory regions that it should monitor
>> (through PTRACE/LD_PRELOAD/explicit API). It then monitors these
>> regions using the remote-userfaultfd that you saw in the second
>> thread. When it wants to reclaim (anonymous) memory, it:
>>
>> 1. Uses UFFD-WP to protect that memory (and for this matter I got a
>>    vectored UFFD-WP to do so efficiently, a patch which I did not
>>    send yet).
>> 2. Calls process_vm_readv() to read that memory from the process.
>> 3. Writes it back to “swap”.
>> 4. Calls process_madvise(MADV_DONTNEED) to zap it.
>>
>> Once the memory is accessed again, the manager uses UFFD-COPY to
>> bring it back. This is really work-in-progress, but eventually
>> performance is not as bad as you would imagine (some patches for
>> efficient use of uffd with io_uring are needed for that matter).
>
> Again, thanks for the details. I guess this should basically work, although it involves a lot of complexity (read: all flavors of uffd on other processes). And I am not so sure about the performance aspects. "Performance is not as bad as you think" doesn't sound like the words you would want to hear from a car dealer ;) So there has to be another big benefit to doing such user space swapping.

There is some complexity, indeed. Worse, there are some quirks of UFFD
that make life hard for no reason, as well as some uffd and io_uring
bugs.

As for my sales pitch - I agree that I am not the best car dealer… :(
When I say performance is not bad, I mean that the core operations of
page-fault handling, prefetch and reclaim do not induce high overhead
*after* the improvements I sent or mentioned.

The benefit of doing so from userspace is that you have full control
over the reclaim/prefetch policies, so you may be able to make better
decisions.

Some workloads have predictable access patterns (see for instance
“MAGE: Nearly Zero-Cost Virtual Memory for Secure Computation”,
OSDI ’21). You may be able to handle such access patterns without
requiring intrusive changes to the workload.
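
To make the quoted sequence concrete, below is a minimal sketch of
steps 1-4 as seen from the manager. It assumes CAP_SYS_PTRACE, a pidfd
for the target, recent kernel headers (for __NR_process_madvise), a
uffd that monitors the target's region (via the remote-uffd patches
from the other thread), and a kernel with this series applied;
reclaim_range() is a made-up name, and error handling is elided:

#define _GNU_SOURCE
#include <linux/userfaultfd.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

static void reclaim_range(int uffd, int pidfd, pid_t pid, int swap_fd,
			  off_t swap_off, void *remote_addr, size_t len,
			  void *local_buf)
{
	/* 1. Write-protect the range so that racing writes fault. */
	struct uffdio_writeprotect wp = {
		.range = { .start = (uintptr_t)remote_addr, .len = len },
		.mode  = UFFDIO_WRITEPROTECT_MODE_WP,
	};
	ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);

	/* 2. Read the target's memory into a local buffer. */
	struct iovec liov = { .iov_base = local_buf,   .iov_len = len };
	struct iovec riov = { .iov_base = remote_addr, .iov_len = len };
	process_vm_readv(pid, &liov, 1, &riov, 1, 0);

	/* 3. Write it out to the manager's "swap" file. */
	pwrite(swap_fd, local_buf, len, swap_off);

	/*
	 * 4. Zap it. process_madvise() is vectored, so many ranges can
	 * be zapped (and their TLB flushes batched) in a single call;
	 * one iovec is shown for brevity. MADV_DONTNEED here requires
	 * a kernel with this series applied.
	 */
	struct iovec zap = { .iov_base = remote_addr, .iov_len = len };
	syscall(__NR_process_madvise, pidfd, &zap, 1, MADV_DONTNEED, 0);
}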


>
>> I am aware that there are some caveats, as zapping the memory does not
>> guarantee that the memory would be freed since it might be pinned for a
>> variety of reasons. That's the reason I mentioned the processes have "some
>> level of cooperation" with the manager. It is not intended to deal with
>> adversaries or uncommon corner cases (e.g., processes that use UFFD for
>> their own reasons).
>
> It's not only long-term pinnings. Pages could have been de-duplicated (COW after fork, KSM, shared zeropage). Further, you'll most probably lose any kind of "aging" ("accessed") information on pages, or how would you track that?

I know it’s not just long-term pinnings. That’s what “variety of reasons”
stood for. ;-)

Aging is a tool for certain types of reclamation policies. Some do not
require it (e.g., random). You can also have compiler/application-guided
reclamation policies. If you are really into “aging”, you may be able
to use PEBS or other CPU facilities to track it.

Anyhow, the access bit by itself is not such a great solution for
tracking aging. Setting it can induce overheads of >500 cycles, in
my (and others’) experience.

>
> Although I can see that this might work, I do wonder if it's a use case worth supporting. As Michal correctly raised, we already have other infrastructure in place to trigger swapin/swapout. I recall that DAMON also wants to let you write advanced policies for that by monitoring actual access characteristics.

Hints such as those that Michal mentioned prevent the efficient use of
userfaultfd. MADV_PAGEOUT will not trigger another uffd event when the
page is brought back from swap, so using MADV_PAGEOUT/MADV_WILLNEED
does not allow you to have a custom prefetch policy, for instance. It
would also require you to live with the kernel reclamation/IO stack,
for better or worse.
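
To illustrate the difference: with uffd, every access to a zapped page
shows up as an event, so the monitor can implement an arbitrary
prefetch policy. Below is a rough sketch of such a fault-handling
loop, again assuming the fd monitors the target (remote-uffd) and a
hypothetical fetch_from_swap() helper:

#include <linux/userfaultfd.h>
#include <poll.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Hypothetical: returns a local page holding the swapped-out data. */
void *fetch_from_swap(uint64_t addr);

static void handle_faults(int uffd, uint64_t page_size)
{
	for (;;) {
		struct pollfd pfd = { .fd = uffd, .events = POLLIN };
		struct uffd_msg msg;

		poll(&pfd, 1, -1);
		if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
			continue;
		if (msg.event != UFFD_EVENT_PAGEFAULT)
			continue;

		uint64_t addr = msg.arg.pagefault.address &
				~(page_size - 1);

		/* Bring the faulting page back... */
		struct uffdio_copy copy = {
			.dst = addr,
			.src = (uintptr_t)fetch_from_swap(addr),
			.len = page_size,
		};
		ioctl(uffd, UFFDIO_COPY, &copy);

		/*
		 * ...and apply any prefetch policy you like; here,
		 * trivially, the next page. Pages reclaimed with
		 * MADV_PAGEOUT come back silently from kernel swap,
		 * so no such hook exists there.
		 */
		struct uffdio_copy pre = {
			.dst = addr + page_size,
			.src = (uintptr_t)fetch_from_swap(addr + page_size),
			.len = page_size,
		};
		ioctl(uffd, UFFDIO_COPY, &pre);
	}
}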

As for DAMON, I am not very familiar with it, but from what I remember
it seemed to look in a similar direction. IMHO it is more intrusive
and less configurable (although it can have the advantage of better
integration with various kernel mechanisms). I was wondering for a
second why you give me such a hard time over a pretty straightforward
extension of process_madvise(), but then I remembered that DAMON got
into the kernel after >30 versions, so I’ll shut up about that. ;-)

>
>> Putting aside my use-case (which I am sure people would be glad to criticize),
>> I can imagine debuggers or emulators may also find use for similar schemes
>> (although I do not have concrete use-cases for them).
>
> I'd be curious about use cases for debuggers/emulators. Especially for emulators I'd guess it makes more sense to just do it within the process. And for debuggers, I'm having a hard time seeing why it would make sense to throw away a page instead of just overwriting it with $PATTERN (e.g., 0). But I'm sure people can be creative :)

I have some more vague ideas, but I am afraid that you will keep
saying that it makes more sense to handle such events from within
a process. I am not sure that this is true. Even for the emulators
that we discuss, the emulated program might run in a different
address space (for sandboxing). You may be able to avoid the need
for remote-UFFD and get away with the current non-cooperative
UFFD, but zapping the memory (for atomic updates) would still
require process_madvise(MADV_DONTNEED) [putting aside various
ptrace solutions].

Anyhow, David, I really appreciate your feedback, and you make
strong points about issues I encounter. Yet, eventually, I think
that the main question in this discussion is whether enabling
process_madvise(MADV_DONTNEED) is any different, from a security
point of view, than process_vm_writev(), not to mention ptrace.
If not, then the same security guards should suffice, I would
argue.
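
For comparison, something like the following already works today
under the PTRACE_MODE_ATTACH check that process_vm_writev() performs,
and destroying the contents of a MAP_PRIVATE page this way is
arguably no less target-visible than zapping it (a minimal sketch;
clobber_remote() is a made-up name):

#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/uio.h>

/* Overwrite part of a page in another process; len <= 4096 assumed. */
static ssize_t clobber_remote(pid_t pid, void *remote_addr, size_t len)
{
	static const char zeros[4096];
	struct iovec liov = { .iov_base = (void *)zeros, .iov_len = len };
	struct iovec riov = { .iov_base = remote_addr,   .iov_len = len };

	/* Gated by the same ptrace-attach security check. */
	return process_vm_writev(pid, &liov, 1, &riov, 1, 0);
}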