Re: [PATCH v3] mm, drm/i915: mark pinned shmemfs pages as unevictable

From: Michal Hocko
Date: Fri Nov 02 2018 - 14:26:21 EST


On Fri 02-11-18 20:35:11, Vovo Yang wrote:
> On Thu, Nov 1, 2018 at 9:10 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> > OK, so that explains my question about the test case. Even though you
> > generate a lot of page cache, the amount is still too small to trigger
> > a pagecache-mostly reclaim, so the anon LRUs get scanned as well.
> >
> > Now to the difference from the previous version, which simply set the
> > UNEVICTABLE flag on the mapping. Am I right in assuming that the pages
> > are already on the LRU at that point? Is there any reason the mapping
> > cannot have the flag set before they are added to the LRU?
>
> I checked again. When I run gem_syslatency, it sets the unevictable flag
> first and then adds the pages to the LRU, so my explanation of the
> previous test result was wrong. It should not be necessary to explicitly
> move these pages to the unevictable list for this test case.

OK, that finally starts to make sense.
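
In other words, because AS_UNEVICTABLE is set on the mapping before the
pages are instantiated, shmem puts them straight onto the unevictable LRU
and nothing has to move them there afterwards. A minimal sketch of that
ordering (pin_shmem_pages() is only an illustrative helper, not an actual
i915 function, and error unwinding of already-pinned pages is omitted):

#include <linux/err.h>
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>

static int pin_shmem_pages(struct address_space *mapping,
                           unsigned long nr_pages, struct page **pages)
{
        unsigned long i;

        /*
         * Mark the whole mapping unevictable before instantiating pages,
         * so they land on the unevictable LRU right away and never have
         * to be moved there after the fact.
         */
        mapping_set_unevictable(mapping);

        for (i = 0; i < nr_pages; i++) {
                struct page *page = shmem_read_mapping_page(mapping, i);

                if (IS_ERR(page))
                        return PTR_ERR(page);
                pages[i] = page;
        }

        return 0;
}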

> The performance
> improvement of this patch on kbl might be due to not calling
> shmem_unlock_mapping.

Yes, that one can get quite expensive. find_get_entries is really
pointless here because you already have the pages in hand. Abstracting
check_move_unevictable_pages into a pagevec API sounds like a reasonable
compromise between code duplication and exporting a relatively low-level
API.
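
Something along the lines of the sketch below, i.e. the driver batches its
pinned pages into a pagevec and hands it to one exported helper instead of
walking the whole mapping. check_move_unevictable_pvec() and
release_pinned_pages() are illustrative names only; the current
check_move_unevictable_pages() takes a page array plus a count:

#include <linux/export.h>
#include <linux/pagevec.h>
#include <linux/swap.h>         /* check_move_unevictable_pages() */

/*
 * mm side -- sketch of a pagevec wrapper around the existing
 * check_move_unevictable_pages(struct page **, int).
 */
void check_move_unevictable_pvec(struct pagevec *pvec)
{
        check_move_unevictable_pages(pvec->pages, pagevec_count(pvec));
}
EXPORT_SYMBOL_GPL(check_move_unevictable_pvec);

/*
 * Driver side -- put unpinned shmemfs pages back on the right LRU in
 * pagevec-sized batches instead of calling shmem_unlock_mapping() on
 * the whole mapping.
 */
static void release_pinned_pages(struct page **pages, unsigned long nr)
{
        struct pagevec pvec;
        unsigned long i;

        pagevec_init(&pvec);
        for (i = 0; i < nr; i++) {
                if (!pagevec_add(&pvec, pages[i])) {
                        /* pagevec full: fix LRU placement, then drop refs */
                        check_move_unevictable_pvec(&pvec);
                        __pagevec_release(&pvec);
                }
        }
        if (pagevec_count(&pvec)) {
                check_move_unevictable_pvec(&pvec);
                __pagevec_release(&pvec);
        }
}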

> The perf result of a shmem lock test shows find_get_entries is the
> most expensive part of shmem_unlock_mapping.
>   85.32%--ksys_shmctl
>           shmctl_do_lock
>            --85.29%--shmem_unlock_mapping
>              |--45.98%--find_get_entries
>              |   --10.16%--radix_tree_next_chunk
>              |--16.78%--check_move_unevictable_pages
>              |--16.07%--__pagevec_release
>              |   --15.67%--release_pages
>              |      --4.82%--free_unref_page_list
>              |--4.38%--pagevec_remove_exceptionals
>               --0.59%--_cond_resched
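
That profile matches the shape of shmem_unlock_mapping. Roughly (an
abridged sketch, see mm/shmem.c for the exact code) it walks the whole
mapping in pagevec-sized batches and has to look every page up again via
find_get_entries, even though the caller already holds references to its
pages:

void shmem_unlock_mapping(struct address_space *mapping)
{
        struct pagevec pvec;
        pgoff_t indices[PAGEVEC_SIZE];
        pgoff_t index = 0;

        pagevec_init(&pvec);
        while (!mapping_unevictable(mapping)) {
                /* re-look-up pages the caller already had in hand */
                pvec.nr = find_get_entries(mapping, index, PAGEVEC_SIZE,
                                           pvec.pages, indices);
                if (!pvec.nr)
                        break;
                index = indices[pvec.nr - 1] + 1;
                pagevec_remove_exceptionals(&pvec);
                check_move_unevictable_pages(pvec.pages, pvec.nr);
                pagevec_release(&pvec);
                cond_resched();
        }
}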

--
Michal Hocko
SUSE Labs