Re: [PATCH RFC 00/10] Introduce lockless shrink_slab()

From: Kirill Tkhai
Date: Wed Aug 08 2018 - 06:18:40 EST


On 08.08.2018 08:39, Shakeel Butt wrote:
> On Tue, Aug 7, 2018 at 6:12 PM Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx> wrote:
>>
>> Hi Kirill,
>>
>> On Tue, 07 Aug 2018 18:37:19 +0300 Kirill Tkhai <ktkhai@xxxxxxxxxxxxx> wrote:
>>>
>>> After bitmaps of not-empty memcg shrinkers were implemented
>>> (see the "[PATCH v9 00/17] Improve shrink_slab() scalability..."
>>> series, which is already in the mm tree), all the overhead in
>>> the perf trace moved from shrink_slab() to down_read_trylock().
>>> As reported by Shakeel Butt:
>>>
>>> > I created 255 memcgs and 255 ext4 mounts, and made each memcg create
>>> > a file of a few KiB on the corresponding mount. Then, in a separate
>>> > memcg with a 200 MiB limit, I ran a fork-bomb.
>>> >
>>> > I ran "perf record -ag -- sleep 60"; the results are below:
>>> > +   47.49%  fb.sh  [kernel.kallsyms]  [k] down_read_trylock
>>> > +   30.72%  fb.sh  [kernel.kallsyms]  [k] up_read
>>> > +    9.51%  fb.sh  [kernel.kallsyms]  [k] mem_cgroup_iter
>>> > +    1.69%  fb.sh  [kernel.kallsyms]  [k] shrink_node_memcg
>>> > +    1.35%  fb.sh  [kernel.kallsyms]  [k] mem_cgroup_protected
>>> > +    1.05%  fb.sh  [kernel.kallsyms]  [k] queued_spin_lock_slowpath
>>> > +    0.85%  fb.sh  [kernel.kallsyms]  [k] _raw_spin_lock
>>> > +    0.78%  fb.sh  [kernel.kallsyms]  [k] lruvec_lru_size
>>> > +    0.57%  fb.sh  [kernel.kallsyms]  [k] shrink_node
>>> > +    0.54%  fb.sh  [kernel.kallsyms]  [k] queue_work_on
>>> > +    0.46%  fb.sh  [kernel.kallsyms]  [k] shrink_slab_memcg
>>>
>>> This patchset continues to improve shrink_slab() scalability and makes
>>> it completely lockless. Here are several steps for that:
>>
>> So do you have any numbers for after these changes?
>>
>
> I will do the same experiment as before with these patches sometime
> this or next week.

Thanks, Shakeel!

> BTW Kirill, thanks for pushing this.
>
> regards,
> Shakeel
>
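For reference, Shakeel's reproduction quoted above can be sketched roughly as the script below. This is a best-effort sketch, not the script actually used in the report: the cgroup v1 paths, the 16 MiB loop-backed ext4 images, and the `./fb.sh` fork-bomb invocation (only its name appears in the perf output) are all assumptions. The script bails out unless it has root, a writable cgroup v1 memory controller, and the needed tools.

```shell
#!/bin/bash
# Hypothetical sketch of the benchmark described in the report above.
# Paths, image sizes, and fb.sh are assumptions; fb.sh (a fork-bomb
# script) is not shown in the thread and must be provided separately.
set -eu

# Bail out gracefully where the setup cannot possibly run.
if [ "$(id -u)" -ne 0 ] || [ ! -w /sys/fs/cgroup/memory ] \
   || ! command -v perf >/dev/null || ! command -v mkfs.ext4 >/dev/null; then
    echo "needs root, cgroup v1 memory controller, perf, mkfs.ext4; skipping"
    exit 0
fi

N=255
mkdir -p /mnt/shrinktest

for i in $(seq 1 "$N"); do
    # One memcg per mount, as in the report.
    mkdir -p "/sys/fs/cgroup/memory/cg$i"

    # Small loop-backed ext4 image for each mount point (size assumed).
    img="/mnt/shrinktest/img$i"
    mnt="/mnt/shrinktest/mnt$i"
    dd if=/dev/zero of="$img" bs=1M count=16 status=none
    mkfs.ext4 -q "$img"
    mkdir -p "$mnt"
    mount -o loop "$img" "$mnt"

    # Create a few-KiB file from inside the corresponding memcg so the
    # page cache, inode, and dentry are charged to that memcg.
    sh -c "echo \$\$ > /sys/fs/cgroup/memory/cg$i/cgroup.procs &&
           dd if=/dev/zero of=$mnt/file bs=1K count=4 status=none"
done

# Separate memcg with a 200 MiB limit running the fork-bomb.
mkdir -p /sys/fs/cgroup/memory/fb
echo $((200 * 1024 * 1024)) > /sys/fs/cgroup/memory/fb/memory.limit_in_bytes
sh -c 'echo $$ > /sys/fs/cgroup/memory/fb/cgroup.procs && exec ./fb.sh' &

# Sample the whole system with call graphs for 60 seconds.
perf record -ag -- sleep 60
```

The constant memcg-limit reclaim forces shrink_slab() to walk the shrinker list for every one of the 255 ext4 superblocks under each memcg, which is what makes shrinker_rwsem's down_read_trylock()/up_read() dominate the profile.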