Re: [PATCH] mm, oom: OOM killer use rss size without shmem

From: Michal Hocko
Date: Fri Feb 22 2019 - 02:10:09 EST


On Fri 22-02-19 13:37:33, Junil Lee wrote:
> The oom killer uses the get_mm_rss() function to estimate how much
> memory will be reclaimed when it selects a victim task.
>
> However, the rss size returned by get_mm_rss() was changed by the
> commit "mm, shmem: add internal shmem resident memory accounting",
> which makes get_mm_rss() return a size that includes SHMEM pages.

This was actually the case even before eca56ff906bdd, because SHMEM
pages were simply accounted to MM_FILEPAGES, so that commit hasn't
really changed anything in this regard.
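
To spell the accounting out (a sketch, not verbatim kernel code; it
uses the MM_* counter names as they exist after eca56ff906bdd):

/*
 * before eca56ff906bdd:
 *	shmem pages were counted in MM_FILEPAGES, so
 *	get_mm_rss() = MM_FILEPAGES + MM_ANONPAGES	(shmem included)
 *
 * after eca56ff906bdd:
 *	shmem pages moved to the new MM_SHMEMPAGES counter, and
 *	get_mm_rss() = MM_FILEPAGES + MM_ANONPAGES + MM_SHMEMPAGES
 *
 * Either way shmem pages are part of the value get_mm_rss() returns.
 */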

Besides that, we cannot simply rule out SHMEM pages. They are backing
MAP_ANON|MAP_SHARED mappings, which might be unmapped and freed during
the oom victim's exit. Moreover, they are essentially in the same
situation as file backed pages or even MAP_PRIVATE|MAP_ANON pages. Both
can be pinned by other processes, e.g. private pages via CoW mappings
and file pages by the filesystem, or simply mlocked by another process.
So this rather gross evaluation will never be perfect. We would
basically have to do an exact calculation of the freeable memory of
each process, and that is just not feasible.
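
To illustrate the MAP_ANON|MAP_SHARED point, a minimal userspace
sketch (my example, not part of the patch): all of the memory below is
shmem backed, so the proposed get_mm_rss_wo_shmem() would ignore it,
although killing the process frees all of it:

#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 64UL << 20;	/* 64MB mapped only by this process */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	memset(p, 1, len);	/* fault it in; accounted as shmem rss */
	/*
	 * /proc/self/status now shows ~64MB of RssShmem, which the
	 * kernel can free as soon as this process exits - exactly the
	 * memory the proposed oom_badness() would not see.
	 */
	pause();
	return 0;
}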

That being said, I do not think the patch is an improvement in that
direction. It just replaces one fuzzy evaluation with another one that
potentially misses a lot of memory.
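
As a back of the envelope example (my numbers, assuming 4kB pages and
negligible swap and page table overhead): a task whose rss is 1GB of
MAP_ANON|MAP_SHARED shmem plus 4MB of private anon memory would drop
from roughly 263168 badness points (262144 + 1024) to roughly 1024,
even though killing it would free the whole 1GB.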

> The oom killer can't directly reclaim SHMEM pages after killing the
> victim process, which leads to miscalculated victim points.
>
> Therefore, add a new API, get_mm_rss_wo_shmem(), which returns the
> rss value excluding MM_SHMEMPAGES.
>
> Signed-off-by: Junil Lee <junil0814.lee@xxxxxxx>
> ---
>  include/linux/mm.h | 6 ++++++
>  mm/oom_kill.c      | 4 ++--
>  2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 2d483db..bca3acc 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1701,6 +1701,12 @@ static inline int mm_counter(struct page *page)
>  	return mm_counter_file(page);
>  }
> 
> +static inline unsigned long get_mm_rss_wo_shmem(struct mm_struct *mm)
> +{
> +	return get_mm_counter(mm, MM_FILEPAGES) +
> +		get_mm_counter(mm, MM_ANONPAGES);
> +}
> +
>  static inline unsigned long get_mm_rss(struct mm_struct *mm)
>  {
>  	return get_mm_counter(mm, MM_FILEPAGES) +
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 3a24848..e569737 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -230,7 +230,7 @@ unsigned long oom_badness(struct task_struct *p, struct mem_cgroup *memcg,
>  	 * The baseline for the badness score is the proportion of RAM that each
>  	 * task's rss, pagetable and swap space use.
>  	 */
> -	points = get_mm_rss(p->mm) + get_mm_counter(p->mm, MM_SWAPENTS) +
> +	points = get_mm_rss_wo_shmem(p->mm) + get_mm_counter(p->mm, MM_SWAPENTS) +
>  		mm_pgtables_bytes(p->mm) / PAGE_SIZE;
>  	task_unlock(p);
> 
> @@ -419,7 +419,7 @@ static void dump_tasks(struct mem_cgroup *memcg, const nodemask_t *nodemask)
> 
>  		pr_info("[%7d] %5d %5d %8lu %8lu %8ld %8lu %5hd %s\n",
>  			task->pid, from_kuid(&init_user_ns, task_uid(task)),
> -			task->tgid, task->mm->total_vm, get_mm_rss(task->mm),
> +			task->tgid, task->mm->total_vm, get_mm_rss_wo_shmem(task->mm),
>  			mm_pgtables_bytes(task->mm),
>  			get_mm_counter(task->mm, MM_SWAPENTS),
>  			task->signal->oom_score_adj, task->comm);
> --
> 2.6.2
>

--
Michal Hocko
SUSE Labs