Re: [PATCH 3/3] mm: Count list_lru_one::nr_items lockless

From: Vladimir Davydov
Date: Wed Aug 23 2017 - 04:27:24 EST


On Wed, Aug 23, 2017 at 11:00:56AM +0300, Kirill Tkhai wrote:
> On 22.08.2017 22:47, Vladimir Davydov wrote:
> > On Tue, Aug 22, 2017 at 03:29:35PM +0300, Kirill Tkhai wrote:
> >> While reclaiming slab objects of a memcg, shrink_slab() iterates
> >> over all registered shrinkers in the system and tries to count and
> >> consume the objects related to that cgroup. Under memory pressure
> >> this behaves badly: I observe high system time and a lot of time
> >> spent in list_lru_count_one() for many processes on a RHEL7 kernel
> >> (collected via $perf record --call-graph fp -j k -a):
> >>
> >> 0,50% nixstatsagent [kernel.vmlinux] [k] _raw_spin_lock [k] _raw_spin_lock
> >> 0,26% nixstatsagent [kernel.vmlinux] [k] shrink_slab [k] shrink_slab
> >> 0,23% nixstatsagent [kernel.vmlinux] [k] super_cache_count [k] super_cache_count
> >> 0,15% nixstatsagent [kernel.vmlinux] [k] __list_lru_count_one.isra.2 [k] _raw_spin_lock
> >> 0,15% nixstatsagent [kernel.vmlinux] [k] list_lru_count_one [k] __list_lru_count_one.isra.2
> >>
> >> 0,94% mysqld [kernel.vmlinux] [k] _raw_spin_lock [k] _raw_spin_lock
> >> 0,57% mysqld [kernel.vmlinux] [k] shrink_slab [k] shrink_slab
> >> 0,51% mysqld [kernel.vmlinux] [k] super_cache_count [k] super_cache_count
> >> 0,32% mysqld [kernel.vmlinux] [k] __list_lru_count_one.isra.2 [k] _raw_spin_lock
> >> 0,32% mysqld [kernel.vmlinux] [k] list_lru_count_one [k] __list_lru_count_one.isra.2
> >>
> >> 0,73% sshd [kernel.vmlinux] [k] _raw_spin_lock [k] _raw_spin_lock
> >> 0,35% sshd [kernel.vmlinux] [k] shrink_slab [k] shrink_slab
> >> 0,32% sshd [kernel.vmlinux] [k] super_cache_count [k] super_cache_count
> >> 0,21% sshd [kernel.vmlinux] [k] __list_lru_count_one.isra.2 [k] _raw_spin_lock
> >> 0,21% sshd [kernel.vmlinux] [k] list_lru_count_one [k] __list_lru_count_one.isra.2
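
For context, the path those traces bottom out in takes the per-node
lru spinlock just to read a counter; roughly (paraphrasing current
mm/list_lru.c from memory, abridged):

	static unsigned long
	__list_lru_count_one(struct list_lru *lru, int nid, int memcg_idx)
	{
		struct list_lru_node *nlru = &lru->node[nid];
		struct list_lru_one *l;
		unsigned long count;

		/* taken for every superblock, node and memcg on each
		 * shrink_slab() pass, just to read nr_items */
		spin_lock(&nlru->lock);
		l = list_lru_from_memcg_idx(nlru, memcg_idx);
		count = l->nr_items;
		spin_unlock(&nlru->lock);

		return count;
	}

With many memcgs and many superblocks all doing that concurrently,
it is no surprise _raw_spin_lock shows up so prominently in the
profiles above.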
> >
> > It would be nice to see how this is improved by this patch.
> > Can you try to record the traces on the vanilla kernel with
> > and without this patch?
>
> Sadly, this is about a production node, and it's impossible to run a vanilla kernel there.

I see :-( Then maybe you could try to come up with a contrived test?
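
Something along these lines might do (completely untested, just to
illustrate the idea; it assumes a v1 memory controller mounted at
/sys/fs/cgroup/memory and a directory tree with plenty of files to
walk, SCAN_DIR). Each child gets its own memcg with a small limit and
keeps charging page cache and dentries past it, so all of them sit in
direct reclaim calling shrink_slab() -> super_cache_count()
concurrently. A few extra tmpfs mounts would multiply the number of
superblocks to count and should make the lock contention visible
under perf record -a -g:

#define _XOPEN_SOURCE 500
#include <fcntl.h>
#include <ftw.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NR_CGROUPS	64			/* > nr_cpus to keep every CPU busy */
#define MEMCG_ROOT	"/sys/fs/cgroup/memory"
#define SCAN_DIR	"/usr"			/* anything with lots of files */
#define MEMCG_LIMIT	(64UL << 20)		/* small limit -> constant reclaim */

static char buf[4096];

/* charge dentries/inodes and a bit of page cache to the caller's memcg */
static int touch(const char *path, const struct stat *sb,
		 int type, struct FTW *ftw)
{
	int fd;

	if (type != FTW_F)
		return 0;
	fd = open(path, O_RDONLY);
	if (fd >= 0) {
		read(fd, buf, sizeof(buf));
		close(fd);
	}
	return 0;
}

static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fprintf(f, "%s", val);
	fclose(f);
}

static void child(int id)
{
	char dir[256], path[512], val[64];

	snprintf(dir, sizeof(dir), MEMCG_ROOT "/lru_test_%d", id);
	mkdir(dir, 0755);

	snprintf(path, sizeof(path), "%s/memory.limit_in_bytes", dir);
	snprintf(val, sizeof(val), "%lu", MEMCG_LIMIT);
	write_str(path, val);

	snprintf(path, sizeof(path), "%s/cgroup.procs", dir);
	snprintf(val, sizeof(val), "%d", getpid());
	write_str(path, val);

	/*
	 * Every pass charges more cache than the limit allows, so the
	 * task stays in memcg direct reclaim, which walks all the
	 * shrinkers and counts the per-memcg lru lists over and over.
	 */
	for (;;)
		nftw(SCAN_DIR, touch, 16, FTW_PHYS);
}

int main(void)
{
	int i;

	for (i = 0; i < NR_CGROUPS; i++)
		if (fork() == 0)
			child(i);

	/* run perf record -a -g in parallel; ^C when done */
	for (i = 0; i < NR_CGROUPS; i++)
		wait(NULL);
	return 0;
}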