Re: [PATCH] vmscan: retry without cache trim mode if nothing scanned

From: Shakeel Butt
Date: Wed Mar 10 2021 - 19:58:52 EST


On Wed, Mar 10, 2021 at 4:47 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>
> From: Huang Ying <ying.huang@xxxxxxxxx>
>
> In shrink_node(), to determine whether to enable cache trim mode, the
> inactive file LRU size is obtained via lruvec_page_state(). That reads
> the value from a per-CPU counter (mem_cgroup_per_node->lruvec_stat[]).
> The error in the per-CPU counter, which comes from CPU-local counting
> and from descendant memory cgroups, can cause real problems. We ran
> into this in the 0-Day performance test.
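
For reference, the cache trim mode decision in shrink_node() looks
roughly like this (a paraphrased sketch, not an exact quote of the
current source):

	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
		sc->cache_trim_mode = 1;	/* scan file LRUs only */
	else
		sc->cache_trim_mode = 0;
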
>
> 0-Day uses a RAM file system as the root file system, so the number of
> reclaimable file pages is very small. In the swap testing, the
> inactive file LRU list soon becomes almost empty. But the size of the
> inactive file LRU list read from the per-CPU counter may stay at a
> much larger value (say, 33, 50, etc.). This wrongly enables cache trim
> mode, even though nothing can actually be scanned. The following
> pattern repeats for a long time in the test:
>
> priority  inactive_file_size  cache_trim_mode
>       12                  33                0
>       11                  33                0
>      ...
>        6                  33                0
>        5                  33                1
>      ...
>        1                  33                1
>
> That is, cache_trim_mode is wrongly enabled once the scan priority
> decreases to 5, and the problem does not recover for a long time.
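
This is consistent with the "file >> sc->priority" check sketched
above: 33 >> 6 == 0, so the mode stays off down to priority 6, while
33 >> 5 == 1, so the stale count of 33 flips the mode on from
priority 5 onwards.
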
>
> It's hard to get a more accurate size of the inactive file list
> without much more overhead, and it's hard to estimate the error of
> the per-CPU counter too, because there may be many descendant memory
> cgroups. But if, after the actual scanning, nothing could be scanned
> with cache trim mode enabled, then enabling it was clearly wrong. So
> we can retry with cache trim mode disabled. This patch implements
> that policy.
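
If I understand the proposal correctly, the policy is along these
lines (a minimal sketch with a hypothetical "retry" label, not the
literal diff):

	unsigned long scanned = sc->nr_scanned;

retry:
	shrink_node_memcgs(pgdat, sc);

	if (sc->cache_trim_mode && sc->nr_scanned == scanned) {
		/*
		 * Cache trim mode was enabled based on a stale LRU size
		 * and nothing was scanned: disable it and retry once.
		 */
		sc->cache_trim_mode = 0;
		goto retry;
	}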

Instead of adding more to the already complicated heuristics, we should
improve the accuracy of the lruvec stats. Johannes has already fixed
the memcg stats using the rstat infrastructure, and Tejun has
suggestions on how to use the rstat infrastructure efficiently for the
lruvec stats at
https://lore.kernel.org/linux-mm/YCFgr300eRiEZwpL@xxxxxxxxxxxxxxx/.
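
For context on how far off the readout can be: each CPU buffers its
lruvec stat updates locally and only folds them into the shared atomic
counter once the local delta exceeds a batch threshold. Roughly (a
simplified sketch of the batching in __mod_memcg_lruvec_state(); field
names abbreviated, and the real code also propagates the delta up the
cgroup hierarchy, which is where the descendant-cgroup error comes in):

	x = val + __this_cpu_read(pn->lruvec_stat_cpu->count[idx]);
	if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
		/* fold the buffered delta into the shared counter */
		atomic_long_add(x, &pn->lruvec_stat[idx]);
		x = 0;
	}
	__this_cpu_write(pn->lruvec_stat_cpu->count[idx], x);

With a batch of 32 events per CPU per cgroup, a reader like
lruvec_page_state() can lag by tens of pages on a multi-CPU machine,
which is consistent with the stale values of 33 or 50 seen above.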