Re: [PATCH 0/2] mm/page_alloc: Remote per-cpu lists drain support

From: Marcelo Tosatti
Date: Mon Mar 07 2022 - 15:48:23 EST


On Thu, Mar 03, 2022 at 11:45:50AM +0000, Mel Gorman wrote:
> On Tue, Feb 08, 2022 at 11:07:48AM +0100, Nicolas Saenz Julienne wrote:
> > This series replaces mm/page_alloc's per-cpu page lists drain mechanism with
> > one that allows accessing the lists remotely. Currently, only the local CPU is
> > permitted to change its per-cpu lists, and it does so on demand, whenever
> > another process requests it by queueing a drain task on that CPU. This causes
> > problems for NOHZ_FULL CPUs and real-time systems that can't tolerate any sort
> > of interruption, and to a lesser extent inconveniences idle and virtualised
> > systems.
> >
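
For context, this is roughly the mechanism being replaced (simplified
from __drain_all_pages() in mm/page_alloc.c; the cpumask computation
and locking details are elided): a work item is queued on every CPU
that has pages on its pcplists, so the drain always executes locally
and interrupts whatever that CPU was running.

	for_each_cpu(cpu, &cpus_with_pcps) {
		struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu);

		drain->zone = zone;
		INIT_WORK(&drain->work, drain_local_pages_wq);
		queue_work_on(cpu, mm_percpu_wq, &drain->work);
	}
	for_each_cpu(cpu, &cpus_with_pcps)
		flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work);
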
>
> I know this has been sitting here for a long while. Last few weeks have
> not been fun.
>
> > Note that this is not the first attempt at fixing the per-cpu page lists:
> > - The first attempt[1] tried to conditionally change the pagesets locking
> > scheme based on the NOHZ_FULL config. It was deemed hard to maintain, as the
> > NOHZ_FULL code path would be rarely tested. Also, it only solves the issue
> > for NOHZ_FULL setups, which isn't ideal.
> > - The second[2] unconditionally switched the local_locks to per-cpu spinlocks
> > (roughly as sketched below). The performance degradation was too big.
> >
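
A minimal sketch of what that second conversion amounts to (field
names and call sites are illustrative, not the exact patch): the
local_lock in front of the pcplists becomes a per-cpu spinlock that a
remote CPU may also take, at the cost of real atomic operations on the
local fast path.

	struct per_cpu_pages {
		spinlock_t lock;	/* replaces the local_lock */
		int count;		/* pages on the lists below */
		struct list_head lists[NR_PCP_LISTS];
		/* high/batch watermarks elided */
	};

	/* Local free path, now also safe against a remote drainer: */
	spin_lock_irqsave(&pcp->lock, flags);
	list_add(&page->lru, &pcp->lists[pindex]);
	pcp->count += 1 << order;
	spin_unlock_irqrestore(&pcp->lock, flags);

	/* ...which lets a drain walk other CPUs' lists directly: */
	for_each_online_cpu(cpu) {
		struct per_cpu_pages *pcp =
			per_cpu_ptr(zone->per_cpu_pageset, cpu);

		spin_lock_irqsave(&pcp->lock, flags);
		free_pcppages_bulk(zone, pcp->count, pcp);
		spin_unlock_irqrestore(&pcp->lock, flags);
	}
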
>
> For unrelated reasons I looked at using llist to avoid locks entirely.
> It turns out that's not possible and a lock is still needed. We know the
> "local_locks to per-cpu spinlocks" conversion took a large penalty, so I
> considered alternatives for how a lock could be used. I found it's
> possible to both remotely drain the lists and avoid the disable/enable
> of IRQs entirely, as long as a preempting IRQ is willing to take the
> zone lock instead (should be very rare). The IRQ part is a bit hairy
> though, as softirqs are also a problem, preempt-rt needs different
> rules, and the llist has to sort PCP refills, which might be a loss in
> total. However, the remote draining may still be interesting. The full
> series is at
> https://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git/ mm-pcpllist-v1r2
>
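
If I read the trylock idea right, it is something along these lines
(names hypothetical, heavily simplified from a real allocation path):
task context takes the pcp spinlock without touching the IRQ state,
while an allocation from (soft)IRQ context only trylocks and, on
contention, falls back to the buddy lists under zone->lock rather than
spinning on a lock that the interrupted task on this CPU may already
hold.

	if (in_hardirq() || in_serving_softirq()) {
		/* Never spin on a lock the interrupted task may hold;
		 * rmqueue_buddy() stands for the zone->lock fallback. */
		if (!spin_trylock(&pcp->lock))
			return rmqueue_buddy(zone, order, migratetype);
	} else {
		spin_lock(&pcp->lock);	/* note: no irqsave */
	}
	page = __rmqueue_pcplist(zone, order, migratetype, pcp);
	spin_unlock(&pcp->lock);
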
> It's still waiting on tests to complete, and not all the changelogs are
> written, which is why it's not posted.
>
> This is a comparison of vanilla versus the "local_locks to per-cpu
> spinlocks" conversion versus the git series (up to "mm/page_alloc:
> Remotely drain per-cpu lists") on the page-faulting microbenchmark I
> originally complained about. The test machine is a 2-socket Cascade
> Lake machine.
>
> pft timings
>                               5.17.0-rc5            5.17.0-rc5            5.17.0-rc5
>                                  vanilla   mm-remotedrain-v2r1      mm-pcpdrain-v1r1
> Amean     elapsed-1      32.54 (  0.00%)     33.08 *  -1.66%*      32.82 *  -0.86%*
> Amean     elapsed-4       8.66 (  0.00%)      9.24 *  -6.72%*       8.69 *  -0.38%*
> Amean     elapsed-7       5.02 (  0.00%)      5.43 *  -8.16%*       5.05 *  -0.55%*
> Amean     elapsed-12      3.07 (  0.00%)      3.38 * -10.00%*       3.09 *  -0.72%*
> Amean     elapsed-21      2.36 (  0.00%)      2.38 *  -0.89%*       2.19 *   7.39%*
> Amean     elapsed-30      1.75 (  0.00%)      1.87 *  -6.50%*       1.62 *   7.59%*
> Amean     elapsed-48      1.71 (  0.00%)      2.00 * -17.32%*       1.71 (  -0.08%)
> Amean     elapsed-79      1.56 (  0.00%)      1.62 *  -3.84%*       1.56 (  -0.02%)
> Amean     elapsed-80      1.57 (  0.00%)      1.65 *  -5.31%*       1.57 (  -0.04%)
>
> Note the local_lock conversion took a 1-17% penalty, while the git tree
> takes a negligible one and still allows remote drains. It might have
> some potential while being less complex than the RCU approach.

Nice!

Hopefully a spinlock can be added to "struct lru_pvecs" without
degrading performance, similarly to what is done here.
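
I.e. something along these lines in mm/swap.c (untested sketch; the
struct layout is the existing one, only the lock type changes), which
would let lru_add_drain_all() empty a remote CPU's pagevecs directly
instead of queueing work on it:

	struct lru_pvecs {
		spinlock_t lock;	/* was: local_lock_t lock */
		struct pagevec lru_add;
		struct pagevec lru_deactivate_file;
		struct pagevec lru_deactivate;
		struct pagevec lru_lazyfree;
	#ifdef CONFIG_SMP
		struct pagevec activate_page;
	#endif
	};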