Re: [LKP] Re: [mm/memcg] bd0b230fe1: will-it-scale.per_process_ops -22.7% regression
From: Feng Tang
Date: Wed Nov 25 2020 - 01:25:13 EST
On Fri, Nov 20, 2020 at 07:44:24PM +0800, Feng Tang wrote:
> On Fri, Nov 13, 2020 at 03:34:36PM +0800, Feng Tang wrote:
> > > I would rather focus on a more effective mem_cgroup layout. It is very
> > > likely that we are just stumbling over two counters here.
> > >
> > > Could you try to add cache alignment of counters after memory and see
> > > which one makes the difference? I do not expect memsw to be the one
> > > because that one is used together with the main counter. But who knows
> > > maybe the way it crosses the cache line has the exact effect. Hard to
> > > tell without other numbers.
> >
> > I added some alignment changes around the 'memsw', but none of them could
> > restore the -22.7%. Following is some log showing what the alignments
> > are:
> >
> > t1: memcg=0x7cd1000 memory=0x7cd10d0 memsw=0x7cd1140 kmem=0x7cd11b0 tcpmem=0x7cd1220
> > t2: memcg=0x7cd0000 memory=0x7cd00d0 memsw=0x7cd0140 kmem=0x7cd01c0 tcpmem=0x7cd0230
> >
> > So both of the 'memsw' are aligned, but t2's 'kmem' is aligned while
> > t1's is not.
> >
> > I will check more on the perf data about detailed hotspots.
>
> Some more check updates about it:
>
> Waiman's patch effectively removes one 'struct page_counter' between
> 'memory' and 'memsw'. And the mem_cgroup is:
>
> struct mem_cgroup {
>
> ...
>
> struct page_counter memory; /* Both v1 & v2 */
>
> union {
> struct page_counter swap; /* v2 only */
> struct page_counter memsw; /* v1 only */
> };
>
> /* Legacy consumer-oriented counters */
> struct page_counter kmem; /* v1 only */
> struct page_counter tcpmem; /* v1 only */
>
> ...
> ...
>
> MEMCG_PADDING(_pad1_);
>
> atomic_t moving_account;
> struct task_struct *move_lock_task;
>
> ...
> };
>
>
> I did experiments inserting a 'page_counter' between 'memory'
> and the 'MEMCG_PADDING(_pad1_)'. No matter where I put it, the
> benchmark result recovers from 145K to 185K, which is
> really confusing, as adding a 'page_counter' right before the
> '_pad1_' doesn't change the cache alignment of any member.
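(For reference, the debug change was essentially just one extra dummy
counter right before the padding -- a rough sketch, the member name
below is mine and purely illustrative:)

	struct mem_cgroup {
		...
		/* debug only: one extra counter right before '_pad1_' */
		struct page_counter dbg_dummy;

		MEMCG_PADDING(_pad1_);
		...
	};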
I think we finally found the trick :). Further debugging shows it
is not related to the alignment inside one cacheline, but to the
pairing of two adjacent cachelines (2N and 2N+1, i.e. one 128-byte pair).
In struct mem_cgroup, the members 'vmstats_local' and 'vmstats_percpu'
sit in one cacheline, while 'vmstats[]' sits in the next cacheline.
When 'adjacent cacheline prefetch' is enabled, if these 2 lines sit in
one pair (128 bytes), say 2N and 2N+1, then there seems to be some
kind of false sharing; if they sit in 2 different pairs, say 2N-1 and
2N, then it's fine.
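To make the pairing argument concrete, below is a small userspace
sketch (the offsets are made-up examples, not the real mem_cgroup
layout) of how two members in neighboring 64-byte lines can either
share one 128-byte prefetch pair or straddle two pairs, depending
only on their base offsets:

/*
 * Sketch: for a given member offset, compute which 64-byte cacheline
 * and which 128-byte adjacent-prefetch pair it falls into.  The
 * offsets below are made up for illustration only.
 */
#include <stdio.h>

static void where(const char *name, unsigned long off)
{
	printf("%-16s offset %4lu -> cacheline %3lu, 128B pair %3lu\n",
	       name, off, off / 64, off / 128);
}

int main(void)
{
	/* neighboring lines 14 and 15: both map to pair 7 */
	where("vmstats_local", 896);
	where("vmstats[]", 960);

	/* shifted by one line (13 and 14): pairs 6 and 7 */
	where("vmstats_local", 832);
	where("vmstats[]", 896);

	return 0;
}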
And with the following patch to relayout these members, the lost
performance is fully restored and even gets better, while also
reducing sizeof(struct mem_cgroup) by 64 bytes:
            parent_commit   Waiman's_commit   +relayout patch
result          187K              145K              200K
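Just to illustrate the relayout mechanism with something compilable
(this is NOT the actual patch: the real change reorders existing
members and also shrinks the struct, while the toy structs below use
simplified stand-in members and a filler line only to make the shift
visible):

/*
 * Toy illustration: moving the read-mostly pointers by one cacheline
 * changes whether they share a 128-byte prefetch pair with the
 * frequently-written counter array.
 */
#include <stddef.h>
#include <stdio.h>

struct before_layout {
	char other[128];	/* cachelines 0-1 */
	void *stats_local;	/* read-mostly, cacheline 2 */
	void *stats_percpu;	/* read-mostly */
	char pad[48];
	long stats[8];		/* hot, written often, cacheline 3 */
};

struct after_layout {
	char other[128];
	char moved[64];		/* one line of cold members moved up */
	void *stats_local;	/* read-mostly, cacheline 3 */
	void *stats_percpu;
	char pad[48];
	long stats[8];		/* hot, cacheline 4 */
};

#define PAIR(type, member)	(offsetof(type, member) / 128)

int main(void)
{
	printf("before: pointers in pair %zu, stats in pair %zu (shared)\n",
	       PAIR(struct before_layout, stats_local),
	       PAIR(struct before_layout, stats));
	printf("after:  pointers in pair %zu, stats in pair %zu (separate)\n",
	       PAIR(struct after_layout, stats_local),
	       PAIR(struct after_layout, stats));
	return 0;
}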
Also, if we disable the hw prefetch feature, Waiman's commit and its
parent commit show no performance difference.
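For reference, on recent Intel parts the hardware prefetchers are
typically controlled through a model-specific register; a minimal
sketch for checking the adjacent-cacheline prefetcher bit is below.
The MSR address (0x1a4) and bit layout (bit 1 == L2 adjacent line
prefetcher disable) are my reading of Intel's docs for recent
Xeon/Core parts and are model specific, so please verify against the
SDM for the exact CPU; it needs root and the msr module loaded.

/*
 * Sketch: read CPU0's prefetcher control MSR and report whether the
 * adjacent-cacheline prefetcher is currently enabled.
 * Assumption: MSR 0x1a4, bit 1 == adjacent line prefetcher disable.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint64_t val;
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/cpu/0/msr (try 'modprobe msr')");
		return 1;
	}
	if (pread(fd, &val, sizeof(val), 0x1a4) != (ssize_t)sizeof(val)) {
		perror("pread MSR 0x1a4");
		close(fd);
		return 1;
	}
	printf("MSR 0x1a4 = 0x%llx: adjacent-line prefetch %s\n",
	       (unsigned long long)val,
	       (val & (1ULL << 1)) ? "disabled" : "enabled");
	close(fd);
	return 0;
}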
Thanks,
Feng