[PATCH] mm: throttle LRU pages skipping on rmap_lock contention

From: Minchan Kim
Date: Thu May 26 2022 - 13:08:55 EST


On Thu, May 12, 2022 at 12:55:16PM -0700, Minchan Kim wrote:
> On Wed, May 11, 2022 at 07:05:23PM -0700, Andrew Morton wrote:
> > On Wed, 11 May 2022 15:57:09 -0700 Minchan Kim <minchan@xxxxxxxxxx> wrote:
> >
> > > >
> > > > Could we burn much CPU time pointlessly churning though the LRU? Could
> > > > it mess up aging decisions enough to be performance-affecting in any
> > > > workload?
> > >
> > > Yes, correct. However, we are already churning the LRUs in several
> > > ways. For example, pages are isolated from and put back on the LRU
> > > list for page migration from several sources (a typical example is
> > > compaction), and in shrink_page_list a failed trylock_page, or a
> > > sc->gfp_mask that does not allow the page to be reclaimed, rotates
> > > the page as well.
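
For reference, here is a condensed sketch of the page-granularity
churn mentioned above, following the shrink_page_list() pattern in
mm/vmscan.c (error handling and most of the reclaim checks are
elided):

while (!list_empty(page_list)) {
	struct page *page = lru_to_page(page_list);

	list_del(&page->lru);

	/* Page lock contended: give up and rotate the page. */
	if (!trylock_page(page))
		goto keep;

	/*
	 * The caller's gfp_mask may forbid reclaiming this page now,
	 * e.g. no __GFP_FS for a page under writeback.
	 */
	if (PageWriteback(page) && !(sc->gfp_mask & __GFP_FS))
		goto keep_locked;

	/* ... actual reclaim work ... */
	unlock_page(page);
	continue;

keep_locked:
	unlock_page(page);
keep:
	/* Put back on the LRU without progress: the churn. */
	list_add(&page->lru, &ret_pages);
}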
> >
> > Well. "we're already doing a risky thing so it's OK to do more of that
> > thing"?
>
> I meant that LRU aging is not rocket science: it is approximate
> anyway, so some extra churn is tolerable.
>
>
> >
> > > >
> > > > Something else?
> > >
> > > One thing I was worried about is the granularity of the churning.
> > > The examples above are page-granularity churning, so they might be
> > > excusable, but this one churns at address-space granularity,
> > > especially for the file LRU (i_mmap_rwsem), which might cause too
> > > much rotation and end in a live-lock (pages keep rotating in a
> > > small LRU under heavy memory pressure).
> > >
> > > If that could be a problem, maybe we can use sc->priority to stop
> > > the skipping beyond a certain level of memory pressure.
> > >
> > > Any thoughts? Do we really need it?
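
To make the idea concrete, the gate could look something like the
sketch below; the helper name and the DEF_PRIORITY / 2 cutoff are
illustrative only, not from an actual patch:

/*
 * Illustrative sketch: skip pages whose rmap_lock is contended only
 * while reclaim priority is still relaxed.  sc->priority starts at
 * DEF_PRIORITY and drops as pressure grows, so once it falls below
 * the cutoff we block on the lock instead of churning the LRU.
 */
static bool may_skip_on_rmap_contention(struct scan_control *sc)
{
	return sc->priority > DEF_PRIORITY / 2;
}

The rmap walk in the aging path would consult this before trylocking
i_mmap_rwsem (or the anon_vma lock) rather than taking it
unconditionally.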
> >
> > Are we able to think of a test which might demonstrate any worst case?
> > Whip that up and see what the numbers say?
>
> Yeah, let me create a worst-case test to see how it goes.
>
> One thread keeps reading a file-backed vma of a 2xRAM file while other
> threads keep changing other vmas mapped to the same file, causing heavy
> i_mmap_rwsem contention in the aging path.

Forking a new thread.

I checked what happens in the worst case. I am not sure how realistic
the worst case is, but it would be great to have a safety net.
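
For reference, the test is roughly the sketch below: a simplified
userspace version where the file path, mapping size and thread count
are placeholders.  One reader scans a mapping of a 2xRAM file to
drive aging while the churn threads mmap/munmap the same file, so the
aging path sees heavy i_mmap_rwsem contention.

#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

#define FILE_PATH  "/mnt/test/file"	/* preallocated to 2x RAM */
#define CHURN_SIZE (1UL << 30)		/* arbitrary per-map size */
#define NR_CHURN   8			/* arbitrary thread count */

static int fd;
static off_t file_size;

/* Touch every page of the whole file, forever, to force aging. */
static void *reader(void *arg)
{
	char *map = mmap(NULL, file_size, PROT_READ, MAP_SHARED, fd, 0);
	volatile char sink;
	off_t off;

	(void)arg;
	for (;;)
		for (off = 0; off < file_size; off += 4096)
			sink = map[off];
	return NULL;
}

/* Map/unmap the same file repeatedly to bounce i_mmap_rwsem. */
static void *churner(void *arg)
{
	(void)arg;
	for (;;) {
		char *map = mmap(NULL, CHURN_SIZE, PROT_READ,
				 MAP_SHARED, fd, 0);
		munmap(map, CHURN_SIZE);
	}
	return NULL;
}

int main(void)
{
	pthread_t reader_thread, t;
	int i;

	fd = open(FILE_PATH, O_RDONLY);
	file_size = lseek(fd, 0, SEEK_END);

	for (i = 0; i < NR_CHURN; i++)
		pthread_create(&t, NULL, churner, NULL);
	pthread_create(&reader_thread, NULL, reader, NULL);
	pthread_join(reader_thread, NULL);
	return 0;
}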