Re: [PATCH 16/28] mm: kswapd backoff for shrinkers

From: Brian Foster
Date: Mon Nov 04 2019 - 14:59:03 EST


On Fri, Nov 01, 2019 at 10:46:06AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@xxxxxxxxxx>
>
> When kswapd reaches the end of the page LRU and starts hitting dirty
> pages, the logic in shrink_node() allows it to back off and wait for
> IO to complete, thereby preventing kswapd from scanning excessively
> and driving the system into swap thrashing and OOM conditions.
>
> When we have inode cache heavy workloads on XFS, we have exactly the
> same problem with reclaiming inodes. The non-blocking kswapd reclaim
> will keep putting pressure on the inode cache, which is unable to
> make progress. When the system gets to the point where there are no
> pages in the LRU to free, there is no swap left and there are no
> clean inodes that can be freed, it will OOM. This has a specific
> signature in OOM:
>
> [ 110.841987] Mem-Info:
> [ 110.842816] active_anon:241 inactive_anon:82 isolated_anon:1
>                active_file:168 inactive_file:143 isolated_file:0
>                unevictable:2621523 dirty:1 writeback:8 unstable:0
>                slab_reclaimable:564445 slab_unreclaimable:420046
>                mapped:1042 shmem:11 pagetables:6509 bounce:0
>                free:77626 free_pcp:2 free_cma:0
>
> In this case, we have about 500-600 pages left in the LRUs, but we
> have ~565000 reclaimable slab pages still available for reclaim.
> Unfortunately, they are mostly dirty inodes, and so we really need
> to be able to throttle kswapd when shrinker progress is limited due
> to reaching the dirty end of the LRU...
>
> So, add a flag to the reclaim_state so that if the shrinker decides
> it needs kswapd to back off and wait for a while (for whatever
> reason), it can do so.
>
> Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> ---
>  include/linux/swap.h |  1 +
>  mm/vmscan.c          | 10 +++++++++-
>  2 files changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index da0913e14bb9..76fc28f0e483 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -133,6 +133,7 @@ struct reclaim_state {
>  	unsigned long reclaimed_pages;	/* pages freed by shrinkers */
>  	unsigned long scanned_objects;	/* quantity of work done */
>  	unsigned long deferred_objects;	/* work that wasn't done */
> +	bool need_backoff;		/* tell kswapd to slow down */
>  };
>
>  /*
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 13c11e10c9c5..0f7d35820057 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2949,8 +2949,16 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>  		 * implies that pages are cycling through the LRU
>  		 * faster than they are written so also forcibly stall.
>  		 */
> -		if (sc->nr.immediate)
> +		if (sc->nr.immediate) {
>  			congestion_wait(BLK_RW_ASYNC, HZ/10);
> +		} else if (reclaim_state && reclaim_state->need_backoff) {
> +			/*
> +			 * Ditto, but it's slab objects cycling through
> +			 * the LRU faster than they can be written back.
> +			 */
> +			congestion_wait(BLK_RW_ASYNC, HZ/10);
> +			reclaim_state->need_backoff = false;
> +		}

Seems reasonable from a functional standpoint, but why not plug into
the existing stall instead of duplicating it? E.g., add a
corresponding ->nr_immediate field to reclaim_state rather than a
bool, then transfer it to the scan_control earlier in the function,
where we already check for reclaim_state and transfer the other
fields (or alternatively keep the bool and use it to bump the
scan_control field). That seems a bit more consistent with the page
processing code, keeps the reclaim_state resets in one place, and
wouldn't leave us with an if/else here for the same stall. Hm?
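
Roughly what I have in mind, as an untested sketch (the field name
and the exact transfer site are illustrative, not tied to your
actual tree):

	struct reclaim_state {
		unsigned long reclaimed_pages;	/* pages freed by shrinkers */
		unsigned long scanned_objects;	/* quantity of work done */
		unsigned long deferred_objects;	/* work that wasn't done */
		unsigned long nr_immediate;	/* backoff requested by shrinkers */
	};

	/*
	 * In shrink_node(), wherever the series already transfers the
	 * other reclaim_state fields into the scan_control:
	 */
	if (reclaim_state) {
		sc->nr.immediate += reclaim_state->nr_immediate;
		reclaim_state->nr_immediate = 0;
	}

Then the existing

	if (sc->nr.immediate)
		congestion_wait(BLK_RW_ASYNC, HZ/10);

stall covers both the page and slab cases without the extra branch.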

Brian

>  	}
>
>  	/*
> --
> 2.24.0.rc0
>