Re: [RFC PATCH 2/3] mm: Drop use of test_and_set_skip in favor of just setting skip

From: Alexander Duyck
Date: Mon Aug 17 2020 - 15:18:03 EST


On Sat, Aug 15, 2020 at 2:51 AM Alex Shi <alex.shi@xxxxxxxxxxxxxxxxx> wrote:
>
>
>
> > On 2020/8/15 at 5:15 AM, Alexander Duyck wrote:
> > On Fri, Aug 14, 2020 at 7:24 AM Alexander Duyck
> > <alexander.duyck@xxxxxxxxx> wrote:
> >>
> >> On Fri, Aug 14, 2020 at 12:19 AM Alex Shi <alex.shi@xxxxxxxxxxxxxxxxx> wrote:
> >>>
> >>>
> >>>
> >>> On 2020/8/13 at 12:02 PM, Alexander Duyck wrote:
> >>>>
> >>>> Since we have dropped the late abort case we can drop the code that was
> >>>> clearing the LRU flag and calling put_page(), since the abort case will
> >>>> no longer be holding a reference to a page.
> >>>>
> >>>> Signed-off-by: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
> >>>
> >>> It seems the case-lru-file-mmap-read case drops about 3% with this patch
> >>> in rough testing on my 80-core machine.
> >>
> >> I'm not sure how it could have that much impact on performance, since
> >> the total effect would just be dropping what should be a redundant
> >> test: we already tested the skip bit before we took the LRU bit, so we
> >> shouldn't need to test it again afterwards.
> >>
> >> I finally got my test setup working last night. I'll have to do some
> >> testing in my environment and I can start trying to see what is going
> >> on.
> >
> > So I ran case-lru-file-mmap-read a few times and I don't see how it is
> > supposed to be testing the compaction code. Compaction doesn't appear
> > to run as a result of the test script, at least on my system.
>
> attached is my kernel config; it is the one used on my machine.

I'm wondering what the margin of error is on the tests you are running.
What is the variance between runs? I'm trying to work out whether the 3%
falls within the noise, or reflects changes due to nothing more than code
shifting around.
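
For what it's worth, one quick way to sanity-check that (assuming the
per-run throughput numbers are collected one per line in a file, say
runs.txt -- the file name here is just for illustration) is something
like:

  awk '{ s += $1; ss += $1 * $1; n++ }
       END { m = s / n; printf "mean=%.2f stddev=%.2f\n", m, sqrt(ss / n - m * m) }' runs.txt

If the standard deviation comes out on the order of 3% of the mean, the
delta could easily be noise.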

In order for the code change to have any effect it needs to actually be
run, and I didn't see the tests triggering compaction on my test system.
How much memory is available on the system you were testing on, such that
the test was enough to trigger compaction?
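
One way to confirm whether compaction ran at all during a given test is
to compare the compaction counters in /proc/vmstat before and after the
run, for example:

  grep -E 'compact_(stall|success|fail|daemon_wake)' /proc/vmstat

If none of those counters move over the course of the run, the test
almost certainly isn't exercising direct compaction or kcompactd.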

> > I wonder if testing this code wouldn't be better done using
> > something like thpscale from
> > mmtests (https://github.com/gormanm/mmtests)? It seems past changes to
> > the compaction code were tested using that, and the config script for
> > the test explains that it is designed specifically to stress the
> > compaction code. I have the test up and running now and hope to
> > collect results over the weekend.
>
> I did the testing, but the awkward thing is that I failed to get a result;
> maybe some packages are missing.

So one thing I noticed is that if you have over 128GB of memory in the
system it will fail unless you update the sysctl value vm.max_map_count.
It defaulted to somewhere close to 64K, and I increased it 20X to 1280K
in order for the test to run without failing on the mmap calls. The
other edit I had to make was to the config file, as the test system I
was on had about 1TB of RAM and my home partition only had about 800GB
to spare, so I had to reduce the map size from 8/10 to 5/8.
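
For reference, bumping the limit is just a sysctl away; the exact value
below is only meant to illustrate the roughly 20X increase mentioned
above, so adjust it to whatever your run actually needs:

  # check the current limit (defaults to roughly 64K)
  sysctl vm.max_map_count
  # raise it for the test; ~1280K was enough on my 1TB machine
  sysctl -w vm.max_map_count=1280000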

> # ../../compare-kernels.sh
>
> thpscale Fault Latencies
> Can't locate List/BinarySearch.pm in @INC (@INC contains: /root/mmtests/bin/lib /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vend.
> BEGIN failed--compilation aborted at /root/mmtests/bin/lib/MMTests/Stat.pm line 13.
> Compilation failed in require at /root/mmtests/work/log/../../bin/compare-mmtests.pl line 13.
> BEGIN failed--compilation aborted at /root/mmtests/work/log/../../bin/compare-mmtests.pl line 13.

I had to install List::BinarySearch.pm, which in turn required installing
the cpan Perl libraries.
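
In case it saves you a step, once cpan is available the missing module
can be pulled in directly, e.g.:

  cpan List::BinarySearch

though installing it from a distro package, if one exists, works just as
well.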

> >
> > There is one change I will probably make to this patch, and that is to
> > place the new code that sets skip_updated where the old code was
> > calling test_and_set_skip. By doing that we can avoid extra checks,
> > and it should help to reduce possible collisions when setting the skip
> > bit in the pageblock flags.
>
> the problem may be the cmpxchg on the pageblock flags, which may involve
> changes from other pageblocks.

That is the only thing I can think of just based on code review.
Although that would imply multiple compaction threads are running, and
as I said, in my tests I never saw kcompactd wake up, so I don't think
the tests you were mentioning were enough to stress compaction.