Re: [RESEND PATCH] mm, oom_reaper: gather each vma to prevent leaking TLB entry

From: Minchan Kim
Date: Thu Nov 09 2017 - 19:19:42 EST


On Tue, Nov 07, 2017 at 09:54:53AM +0000, Wang Nan wrote:
> tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
> space. In this case, tlb->fullmm is true. Some archs, such as arm64, don't
> flush the TLB when tlb->fullmm is true:
>
> commit 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1").
>
> This leaks TLB entries.

Does that mean soft-dirty, which uses tlb_gather_mmu with fullmm, could be
broken by losing the write-protection bit once arm64 supports it in the
future?

If so, it would be better to use TASK_SIZE rather than -1 in tlb_gather_mmu.
Of course, that is off-topic.
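
For reference, the alternative would look roughly like this (a sketch of
the suggestion, not part of the patch; the soft-dirty caller name is from
memory):

```c
/*
 * Sketch: a caller that must guarantee the TLB is actually flushed
 * (e.g. the soft-dirty clearing path in clear_refs_write()) could pass
 * an explicit range instead of the fullmm (0, -1) form, so that
 * tlb->fullmm stays false and architectures cannot elide the flush:
 */
tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);	/* fullmm == 0 */
/* ... walk and write-protect the address space ... */
tlb_finish_mmu(&tlb, 0, TASK_SIZE);	/* flush is not skipped */
```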

However, I want to add a big fat comment to tlb_gather_mmu warning that "TLB
flushing with (0, -1) can be skipped on some architectures", so that upcoming
users are aware of it.
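
Such a comment might read something like this (my wording, only a
suggestion):

```c
/*
 * NOTE: tlb_gather_mmu(tlb, mm, 0, -1) sets tlb->fullmm.  Some
 * architectures (e.g. arm64 since commit 5a7862e83000) skip the TLB
 * invalidation entirely for fullmm teardown, relying on the ASID not
 * being reused without a flush.  Only use the (0, -1) form when the
 * whole mm is really going away; otherwise pass an explicit range.
 */
```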

Thanks.

>
> Will clarified the behavior of his patch:
>
> > Basically, we tag each address space with an ASID (PCID on x86) which
> > is resident in the TLB. This means we can elide TLB invalidation when
> > pulling down a full mm because we won't ever assign that ASID to another mm
> > without doing TLB invalidation elsewhere (which actually just nukes the
> > whole TLB).
> >
> > I think that means that we could potentially not fault on a kernel uaccess,
> > because we could hit in the TLB.
>
> There is a window between complete_signal() sending an IPI to other
> cores and all threads sharing this mm actually being kicked off their
> cores. In this window, the OOM reaper may call tlb_flush_mmu_tlbonly() to
> flush the TLB and then free pages. However, due to the above problem, the
> TLB entries are not really flushed on arm64, so other threads can still
> access these pages through stale TLB entries. Moreover, a copy_to_user()
> can also write to these pages without generating a page fault, causing
> use-after-free bugs.
>
> This patch gathers each vma instead of gathering the full vm space, so
> tlb->fullmm is not true. The behavior of the OOM reaper becomes similar
> to munmapping before do_exit, which should be safe on all archs.
>
> Signed-off-by: Wang Nan <wangnan0@xxxxxxxxxx>
> Cc: Bob Liu <liubo95@xxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Cc: David Rientjes <rientjes@xxxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: Roman Gushchin <guro@xxxxxx>
> Cc: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxxxxxx>
> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> ---
> mm/oom_kill.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index dee0f75..18c5b35 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -532,7 +532,6 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
> */
> set_bit(MMF_UNSTABLE, &mm->flags);
>
> - tlb_gather_mmu(&tlb, mm, 0, -1);
> for (vma = mm->mmap ; vma; vma = vma->vm_next) {
> if (!can_madv_dontneed_vma(vma))
> continue;
> @@ -547,11 +546,13 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
> * we do not want to block exit_mmap by keeping mm ref
> * count elevated without a good reason.
> */
> - if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED))
> + if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
> + tlb_gather_mmu(&tlb, mm, vma->vm_start, vma->vm_end);
> unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end,
> NULL);
> + tlb_finish_mmu(&tlb, vma->vm_start, vma->vm_end);
> + }
> }
> - tlb_finish_mmu(&tlb, 0, -1);
> pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n",
> task_pid_nr(tsk), tsk->comm,
> K(get_mm_counter(mm, MM_ANONPAGES)),
> --
> 2.10.1
>