Re: [sched/fair] 0b0695f2b3: phoronix-test-suite.compress-gzip.0.seconds 19.8% regression

From: Vincent Guittot
Date: Wed May 20 2020 - 09:05:08 EST


On Thu, 14 May 2020 at 19:09, Vincent Guittot
<vincent.guittot@xxxxxxxxxx> wrote:
>
> Hi Oliver,
>
> On Thu, 14 May 2020 at 16:05, kernel test robot <oliver.sang@xxxxxxxxx> wrote:
> >
> > Hi Vincent Guittot,
> >
> > Below report FYI.
> > Last year, we actually reported an improvement "[sched/fair] 0b0695f2b3:
> > vm-scalability.median 3.1% improvement" on link [1],
> > but now we have found a regression on pts.compress-gzip.
> > This seems to align with what was shown in "[v4,00/10] sched/fair: rework the CFS
> > load balance" (link [2]), which showed that the reworked load balance could
> > have both positive and negative effects on different test suites.
>
> We have tried to run all possible use cases, but it's impossible to
> cover them all, so there was always a possibility that a case we had
> not covered would regress.
>
> > And also from link [3], the patch set risks regressions.
> >
> > We also confirmed this regression on another platform
> > (Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz with 8G memory),
> > below is the data (lower is better).
> > v5.4 4.1
> > fcf0553db6f4c79387864f6e4ab4a891601f395e 4.01
> > 0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912 4.89
> > v5.5 5.18
> > v5.6 4.62
> > v5.7-rc2 4.53
> > v5.7-rc3 4.59
> >
> > It seems there is some recovery on the latest kernels, but performance is
> > not fully back.
> > We were just wondering whether you could shed some light on the further
> > work on load balancing after patch set [2] that could cause this
> > performance change, and whether you plan to refine the load balance
> > algorithm further?
>
> I'm going to have a look at your regression to understand what is
> going wrong and how it can be fixed

I have run the benchmark on my local setups to try to reproduce the
regression, but I don't see it. My setups are different from yours,
though, so it might be a problem specific to your configuration.

Analysing the benchmark shows that it doesn't overload the system and
is mainly based on one main gzip thread, with a few others waking up
and going back to sleep around it.
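
Something like the following per-thread view should make that pattern
visible (a rough sketch: it assumes sysstat's pidstat is installed, and
the pgrep selection is just an example):

  # per-thread CPU usage and context-switch counts, sampled every second
  pidstat -u -w -t -p $(pgrep -n gzip) 1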

I thought that the scheduler could be too aggressive when trying to
balance the threads on your system, which could generate more task
migrations and impact the performance. But this doesn't seem to be the
case, because perf-stat.i.cpu-migrations is down 8%. On the other hand,
context switches are up 16% and, more interestingly, usage of the C1E
and C6 idle states increases by more than 50%. I don't know if we can
rely on this value or not, but I wonder if the threads are now spread
across different CPUs, which generates idle time on the busy CPUs while
the added time to enter/leave these idle states hurts the performance.
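
If you want to cross-check those cpuidle numbers on your side, the
per-state counters can also be read directly from sysfs on both
kernels; a minimal sketch (state names and indexes depend on the
platform):

  # usage count and residency (us) of each idle state on CPU0
  for s in /sys/devices/system/cpu/cpu0/cpuidle/state*; do
      echo "$(cat $s/name): usage=$(cat $s/usage) time_us=$(cat $s/time)"
  done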

Could you capture some traces of both kernels? Tracing sched events
should be enough to understand the behavior; see the example commands below.
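
For example, something along these lines (only a sketch: the perf
equivalent works just as well, and the exact benchmark invocation may
differ on your setup):

  trace-cmd record -e sched:sched_switch -e sched:sched_wakeup \
          -e sched:sched_wakeup_new -e sched:sched_migrate_task \
          phoronix-test-suite batch-benchmark compress-gzip

  # or with perf:
  perf sched record -- phoronix-test-suite batch-benchmark compress-gzip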

Regards,
Vincent

>
> Thanks
> Vincent
>
> > thanks
> >
> > [1] https://lists.01.org/hyperkitty/list/lkp@xxxxxxxxxxxx/thread/SANC7QLYZKUNMM6O7UNR3OAQAKS5BESE/
> > [2] https://lore.kernel.org/patchwork/cover/1141687/
> > [3] https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.5-Scheduler
> >
> >
> >
> > Below is the detailed regression report FYI.
> >
> > Greeting,
> >
> > FYI, we noticed a 19.8% regression of phoronix-test-suite.compress-gzip.0.seconds due to commit:
> >
> >
> > commit: 0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912 ("sched/fair: Rework load_balance()")
> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> >
> > in testcase: phoronix-test-suite
> > on test machine: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
> > with following parameters:
> >
> > test: compress-gzip-1.2.0
> > cpufreq_governor: performance
> > ucode: 0xca
> >
> > test-description: The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available that provides an extensible framework for which new tests can be easily added.
> > test-url: http://www.phoronix-test-suite.com/
> >
> > In addition to that, the commit also has significant impact on the following tests:
> >
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | phoronix-test-suite: |
> > | test machine | 12 threads Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz with 8G memory |
> > | test parameters | cpufreq_governor=performance |
> > | | test=compress-gzip-1.2.0 |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | vm-scalability: vm-scalability.median 3.1% improvement |
> > | test machine | 104 threads Skylake with 192G memory |
> > | test parameters | cpufreq_governor=performance |
> > | | runtime=300s |
> > | | size=8T |
> > | | test=anon-cow-seq |
> > | | ucode=0x2000064 |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.fault.ops_per_sec -23.1% regression |
> > | test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
> > | test parameters | class=scheduler |
> > | | cpufreq_governor=performance |
> > | | disk=1HDD |
> > | | nr_threads=100% |
> > | | sc_pid_max=4194304 |
> > | | testtime=1s |
> > | | ucode=0xb000038 |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec -33.3% regression |
> > | test machine | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory |
> > | test parameters | class=interrupt |
> > | | cpufreq_governor=performance |
> > | | disk=1HDD |
> > | | nr_threads=100% |
> > | | testtime=1s |
> > | | ucode=0x500002c |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 42.3% improvement |
> > | test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
> > | test parameters | class=interrupt |
> > | | cpufreq_governor=performance |
> > | | disk=1HDD |
> > | | nr_threads=100% |
> > | | testtime=30s |
> > | | ucode=0xb000038 |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 35.1% improvement |
> > | test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
> > | test parameters | class=interrupt |
> > | | cpufreq_governor=performance |
> > | | disk=1HDD |
> > | | nr_threads=100% |
> > | | testtime=1s |
> > | | ucode=0xb000038 |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.ioprio.ops_per_sec -20.7% regression |
> > | test machine | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory |
> > | test parameters | class=os |
> > | | cpufreq_governor=performance |
> > | | disk=1HDD |
> > | | fs=ext4 |
> > | | nr_threads=100% |
> > | | testtime=1s |
> > | | ucode=0x500002b |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 43.0% improvement |
> > | test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
> > | test parameters | class=interrupt |
> > | | cpufreq_governor=performance |
> > | | disk=1HDD |
> > | | nr_threads=100% |
> > | | testtime=30s |
> > | | ucode=0xb000038 |
> > +------------------+-----------------------------------------------------------------------+
> >
> >
> > If you fix the issue, kindly add following tag
> > Reported-by: kernel test robot <oliver.sang@xxxxxxxxx>
> >
> >
> > Details are as below:
> > -------------------------------------------------------------------------------------------------->
> >
> >
> > To reproduce:
> >
> > git clone https://github.com/intel/lkp-tests.git
> > cd lkp-tests
> > bin/lkp install job.yaml # job file is attached in this email
> > bin/lkp run job.yaml
> >
> > =========================================================================================
> > compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/ucode:
> > gcc-7/performance/x86_64-lck-7983/clear-x86_64-phoronix-30140/lkp-cfl-e1/compress-gzip-1.2.0/phoronix-test-suite/0xca
> >
> > commit:
> > fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> > 0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> > fail:runs %reproduction fail:runs
> > | | |
> > :4 4% 0:7 perf-profile.children.cycles-pp.error_entry
> > %stddev %change %stddev
> > \ | \
> > 6.01 +19.8% 7.20 phoronix-test-suite.compress-gzip.0.seconds
> > 147.57 ± 8% +25.1% 184.54 phoronix-test-suite.time.elapsed_time
> > 147.57 ± 8% +25.1% 184.54 phoronix-test-suite.time.elapsed_time.max
> > 52926 ± 8% -23.8% 40312 meminfo.max_used_kB
> > 0.11 ± 7% -0.0 0.09 ± 3% mpstat.cpu.all.soft%
> > 242384 -1.4% 238931 proc-vmstat.nr_inactive_anon
> > 242384 -1.4% 238931 proc-vmstat.nr_zone_inactive_anon
> > 1.052e+08 ± 27% +56.5% 1.647e+08 ± 10% cpuidle.C1E.time
> > 1041078 ± 22% +54.7% 1610786 ± 7% cpuidle.C1E.usage
> > 3.414e+08 ± 6% +57.6% 5.381e+08 ± 28% cpuidle.C6.time
> > 817897 ± 3% +50.1% 1227607 ± 11% cpuidle.C6.usage
> > 2884 -4.2% 2762 turbostat.Avg_MHz
> > 1041024 ± 22% +54.7% 1610657 ± 7% turbostat.C1E
> > 817802 ± 3% +50.1% 1227380 ± 11% turbostat.C6
> > 66.75 -2.0% 65.42 turbostat.CorWatt
> > 67.28 -2.0% 65.94 turbostat.PkgWatt
> > 32.50 +6.2% 34.50 vmstat.cpu.id
> > 62.50 -2.4% 61.00 vmstat.cpu.us
> > 2443 ± 2% -28.9% 1738 ± 2% vmstat.io.bi
> > 23765 ± 4% +16.5% 27685 vmstat.system.cs
> > 37860 -7.1% 35180 ± 2% vmstat.system.in
> > 3.474e+09 ± 3% -12.7% 3.032e+09 perf-stat.i.branch-instructions
> > 1.344e+08 ± 2% -11.6% 1.188e+08 perf-stat.i.branch-misses
> > 13033225 ± 4% -19.0% 10561032 perf-stat.i.cache-misses
> > 5.105e+08 ± 3% -15.3% 4.322e+08 perf-stat.i.cache-references
> > 24205 ± 4% +16.3% 28161 perf-stat.i.context-switches
> > 30.25 ± 2% +39.7% 42.27 ± 2% perf-stat.i.cpi
> > 4.63e+10 -4.7% 4.412e+10 perf-stat.i.cpu-cycles
> > 3147 ± 4% -8.4% 2882 ± 2% perf-stat.i.cpu-migrations
> > 16724 ± 2% +45.9% 24406 ± 5% perf-stat.i.cycles-between-cache-misses
> > 0.18 ± 13% -0.1 0.12 ± 4% perf-stat.i.dTLB-load-miss-rate%
> > 4.822e+09 ± 3% -11.9% 4.248e+09 perf-stat.i.dTLB-loads
> > 0.07 ± 8% -0.0 0.05 ± 16% perf-stat.i.dTLB-store-miss-rate%
> > 1.623e+09 ± 2% -11.5% 1.436e+09 perf-stat.i.dTLB-stores
> > 1007120 ± 3% -8.9% 917854 ± 2% perf-stat.i.iTLB-load-misses
> > 1.816e+10 ± 3% -12.2% 1.594e+10 perf-stat.i.instructions
> > 2.06 ± 54% -66.0% 0.70 perf-stat.i.major-faults
> > 29896 ± 13% -35.2% 19362 ± 8% perf-stat.i.minor-faults
> > 0.00 ± 9% -0.0 0.00 ± 6% perf-stat.i.node-load-miss-rate%
> > 1295134 ± 3% -14.2% 1111173 perf-stat.i.node-loads
> > 3064949 ± 4% -18.7% 2491063 ± 2% perf-stat.i.node-stores
> > 29898 ± 13% -35.2% 19363 ± 8% perf-stat.i.page-faults
> > 28.10 -3.5% 27.12 perf-stat.overall.MPKI
> > 2.55 -0.1 2.44 ± 2% perf-stat.overall.cache-miss-rate%
> > 2.56 ± 3% +8.5% 2.77 perf-stat.overall.cpi
> > 3567 ± 5% +17.3% 4186 perf-stat.overall.cycles-between-cache-misses
> > 0.02 ± 3% +0.0 0.02 ± 3% perf-stat.overall.dTLB-load-miss-rate%
> > 18031 -3.6% 17375 ± 2% perf-stat.overall.instructions-per-iTLB-miss
> > 0.39 ± 3% -7.9% 0.36 perf-stat.overall.ipc
> > 3.446e+09 ± 3% -12.6% 3.011e+09 perf-stat.ps.branch-instructions
> > 1.333e+08 ± 2% -11.5% 1.18e+08 perf-stat.ps.branch-misses
> > 12927998 ± 4% -18.8% 10491818 perf-stat.ps.cache-misses
> > 5.064e+08 ± 3% -15.2% 4.293e+08 perf-stat.ps.cache-references
> > 24011 ± 4% +16.5% 27973 perf-stat.ps.context-switches
> > 4.601e+10 -4.6% 4.391e+10 perf-stat.ps.cpu-cycles
> > 3121 ± 4% -8.3% 2863 ± 2% perf-stat.ps.cpu-migrations
> > 4.783e+09 ± 3% -11.8% 4.219e+09 perf-stat.ps.dTLB-loads
> > 1.61e+09 ± 2% -11.4% 1.426e+09 perf-stat.ps.dTLB-stores
> > 999100 ± 3% -8.7% 911974 ± 2% perf-stat.ps.iTLB-load-misses
> > 1.802e+10 ± 3% -12.1% 1.584e+10 perf-stat.ps.instructions
> > 2.04 ± 54% -65.9% 0.70 perf-stat.ps.major-faults
> > 29656 ± 13% -35.1% 19237 ± 8% perf-stat.ps.minor-faults
> > 1284601 ± 3% -14.1% 1103823 perf-stat.ps.node-loads
> > 3039931 ± 4% -18.6% 2474451 ± 2% perf-stat.ps.node-stores
> > 29658 ± 13% -35.1% 19238 ± 8% perf-stat.ps.page-faults
> > 50384 ± 2% +16.5% 58713 ± 4% softirqs.CPU0.RCU
> > 33143 ± 2% +19.9% 39731 ± 2% softirqs.CPU0.SCHED
> > 72672 +24.0% 90109 softirqs.CPU0.TIMER
> > 22182 ± 4% +26.3% 28008 ± 4% softirqs.CPU1.SCHED
> > 74465 ± 4% +26.3% 94027 ± 3% softirqs.CPU1.TIMER
> > 18680 ± 7% +29.2% 24135 ± 3% softirqs.CPU10.SCHED
> > 75941 ± 2% +21.8% 92486 ± 7% softirqs.CPU10.TIMER
> > 48991 ± 4% +22.7% 60105 ± 5% softirqs.CPU11.RCU
> > 18666 ± 6% +28.4% 23976 ± 4% softirqs.CPU11.SCHED
> > 74896 ± 6% +24.4% 93173 ± 3% softirqs.CPU11.TIMER
> > 49490 +20.5% 59659 ± 2% softirqs.CPU12.RCU
> > 18973 ± 7% +26.0% 23909 ± 3% softirqs.CPU12.SCHED
> > 50620 +19.9% 60677 ± 6% softirqs.CPU13.RCU
> > 19136 ± 6% +23.2% 23577 ± 4% softirqs.CPU13.SCHED
> > 74812 +33.3% 99756 ± 7% softirqs.CPU13.TIMER
> > 50824 +15.9% 58881 ± 3% softirqs.CPU14.RCU
> > 19550 ± 5% +24.1% 24270 ± 4% softirqs.CPU14.SCHED
> > 76801 +22.8% 94309 ± 4% softirqs.CPU14.TIMER
> > 51844 +11.5% 57795 ± 3% softirqs.CPU15.RCU
> > 19204 ± 8% +28.4% 24662 ± 2% softirqs.CPU15.SCHED
> > 74751 +29.9% 97127 ± 3% softirqs.CPU15.TIMER
> > 50307 +17.4% 59062 ± 4% softirqs.CPU2.RCU
> > 22150 +12.2% 24848 softirqs.CPU2.SCHED
> > 79653 ± 2% +21.6% 96829 ± 10% softirqs.CPU2.TIMER
> > 50833 +21.1% 61534 ± 4% softirqs.CPU3.RCU
> > 18935 ± 2% +32.0% 25002 ± 3% softirqs.CPU3.SCHED
> > 50569 +15.8% 58570 ± 4% softirqs.CPU4.RCU
> > 20509 ± 5% +18.3% 24271 softirqs.CPU4.SCHED
> > 80942 ± 2% +15.3% 93304 ± 3% softirqs.CPU4.TIMER
> > 50692 +16.5% 59067 ± 4% softirqs.CPU5.RCU
> > 20237 ± 3% +18.2% 23914 ± 3% softirqs.CPU5.SCHED
> > 78963 +21.8% 96151 ± 2% softirqs.CPU5.TIMER
> > 19709 ± 7% +20.1% 23663 softirqs.CPU6.SCHED
> > 81250 +15.9% 94185 softirqs.CPU6.TIMER
> > 51379 +15.0% 59108 softirqs.CPU7.RCU
> > 19642 ± 5% +28.4% 25227 ± 3% softirqs.CPU7.SCHED
> > 78299 ± 2% +30.3% 102021 ± 4% softirqs.CPU7.TIMER
> > 49723 +19.0% 59169 ± 4% softirqs.CPU8.RCU
> > 20138 ± 6% +21.7% 24501 ± 2% softirqs.CPU8.SCHED
> > 75256 ± 3% +22.8% 92419 ± 2% softirqs.CPU8.TIMER
> > 50406 ± 2% +17.4% 59178 ± 4% softirqs.CPU9.RCU
> > 19182 ± 9% +24.2% 23831 ± 6% softirqs.CPU9.SCHED
> > 73572 ± 5% +30.4% 95951 ± 8% softirqs.CPU9.TIMER
> > 812363 +16.6% 946858 ± 3% softirqs.RCU
> > 330042 ± 4% +23.5% 407533 softirqs.SCHED
> > 1240046 +22.5% 1519539 softirqs.TIMER
> > 251015 ± 21% -84.2% 39587 ±106% sched_debug.cfs_rq:/.MIN_vruntime.avg
> > 537847 ± 4% -44.8% 297100 ± 66% sched_debug.cfs_rq:/.MIN_vruntime.max
> > 257990 ± 5% -63.4% 94515 ± 82% sched_debug.cfs_rq:/.MIN_vruntime.stddev
> > 38935 +47.9% 57601 sched_debug.cfs_rq:/.exec_clock.avg
> > 44119 +40.6% 62013 sched_debug.cfs_rq:/.exec_clock.max
> > 37622 +49.9% 56404 sched_debug.cfs_rq:/.exec_clock.min
> > 47287 ± 7% -70.3% 14036 ± 88% sched_debug.cfs_rq:/.load.min
> > 67.17 -52.9% 31.62 ± 31% sched_debug.cfs_rq:/.load_avg.min
> > 251015 ± 21% -84.2% 39588 ±106% sched_debug.cfs_rq:/.max_vruntime.avg
> > 537847 ± 4% -44.8% 297103 ± 66% sched_debug.cfs_rq:/.max_vruntime.max
> > 257991 ± 5% -63.4% 94516 ± 82% sched_debug.cfs_rq:/.max_vruntime.stddev
> > 529078 ± 3% +45.2% 768398 sched_debug.cfs_rq:/.min_vruntime.avg
> > 547175 ± 2% +44.1% 788582 sched_debug.cfs_rq:/.min_vruntime.max
> > 496420 +48.3% 736148 ± 2% sched_debug.cfs_rq:/.min_vruntime.min
> > 1.33 ± 15% -44.0% 0.75 ± 32% sched_debug.cfs_rq:/.nr_running.avg
> > 0.83 ± 20% -70.0% 0.25 ± 70% sched_debug.cfs_rq:/.nr_running.min
> > 0.54 ± 8% -15.9% 0.45 ± 7% sched_debug.cfs_rq:/.nr_running.stddev
> > 0.33 +62.9% 0.54 ± 8% sched_debug.cfs_rq:/.nr_spread_over.avg
> > 1.33 +54.7% 2.06 ± 17% sched_debug.cfs_rq:/.nr_spread_over.max
> > 0.44 ± 7% +56.4% 0.69 ± 6% sched_debug.cfs_rq:/.nr_spread_over.stddev
> > 130.83 ± 14% -25.6% 97.37 ± 15% sched_debug.cfs_rq:/.runnable_load_avg.avg
> > 45.33 ± 6% -79.3% 9.38 ± 70% sched_debug.cfs_rq:/.runnable_load_avg.min
> > 47283 ± 7% -70.9% 13741 ± 89% sched_debug.cfs_rq:/.runnable_weight.min
> > 1098 ± 8% -27.6% 795.02 ± 24% sched_debug.cfs_rq:/.util_avg.avg
> > 757.50 ± 9% -51.3% 369.25 ± 10% sched_debug.cfs_rq:/.util_avg.min
> > 762.39 ± 11% -44.4% 424.04 ± 46% sched_debug.cfs_rq:/.util_est_enqueued.avg
> > 314.00 ± 18% -78.5% 67.38 ±100% sched_debug.cfs_rq:/.util_est_enqueued.min
> > 142951 ± 5% +22.8% 175502 ± 3% sched_debug.cpu.avg_idle.avg
> > 72112 -18.3% 58937 ± 13% sched_debug.cpu.avg_idle.stddev
> > 127638 ± 7% +39.3% 177858 ± 5% sched_debug.cpu.clock.avg
> > 127643 ± 7% +39.3% 177862 ± 5% sched_debug.cpu.clock.max
> > 127633 ± 7% +39.3% 177855 ± 5% sched_debug.cpu.clock.min
> > 126720 ± 7% +39.4% 176681 ± 5% sched_debug.cpu.clock_task.avg
> > 127168 ± 7% +39.3% 177179 ± 5% sched_debug.cpu.clock_task.max
> > 125240 ± 7% +39.5% 174767 ± 5% sched_debug.cpu.clock_task.min
> > 563.60 ± 2% +25.9% 709.78 ± 9% sched_debug.cpu.clock_task.stddev
> > 1.66 ± 18% -37.5% 1.04 ± 32% sched_debug.cpu.nr_running.avg
> > 0.83 ± 20% -62.5% 0.31 ± 87% sched_debug.cpu.nr_running.min
> > 127617 ± 3% +52.9% 195080 sched_debug.cpu.nr_switches.avg
> > 149901 ± 6% +45.2% 217652 sched_debug.cpu.nr_switches.max
> > 108182 ± 5% +61.6% 174808 sched_debug.cpu.nr_switches.min
> > 0.20 ± 5% -62.5% 0.07 ± 67% sched_debug.cpu.nr_uninterruptible.avg
> > -29.33 -13.5% -25.38 sched_debug.cpu.nr_uninterruptible.min
> > 92666 ± 8% +66.8% 154559 sched_debug.cpu.sched_count.avg
> > 104565 ± 11% +57.2% 164374 sched_debug.cpu.sched_count.max
> > 80272 ± 10% +77.2% 142238 sched_debug.cpu.sched_count.min
> > 38029 ± 10% +80.4% 68608 sched_debug.cpu.sched_goidle.avg
> > 43413 ± 11% +68.5% 73149 sched_debug.cpu.sched_goidle.max
> > 32420 ± 11% +94.5% 63069 sched_debug.cpu.sched_goidle.min
> > 51567 ± 8% +60.7% 82878 sched_debug.cpu.ttwu_count.avg
> > 79015 ± 9% +45.2% 114717 ± 4% sched_debug.cpu.ttwu_count.max
> > 42919 ± 9% +63.3% 70086 sched_debug.cpu.ttwu_count.min
> > 127632 ± 7% +39.3% 177854 ± 5% sched_debug.cpu_clk
> > 125087 ± 7% +40.1% 175285 ± 5% sched_debug.ktime
> > 127882 ± 6% +39.3% 178163 ± 5% sched_debug.sched_clk
> > 146.00 ± 13% +902.9% 1464 ±143% interrupts.133:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
> > 3375 ± 93% -94.8% 174.75 ± 26% interrupts.134:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
> > 297595 ± 8% +22.8% 365351 ± 2% interrupts.CPU0.LOC:Local_timer_interrupts
> > 8402 -21.7% 6577 ± 25% interrupts.CPU0.NMI:Non-maskable_interrupts
> > 8402 -21.7% 6577 ± 25% interrupts.CPU0.PMI:Performance_monitoring_interrupts
> > 937.00 ± 2% +18.1% 1106 ± 11% interrupts.CPU0.RES:Rescheduling_interrupts
> > 146.00 ± 13% +902.9% 1464 ±143% interrupts.CPU1.133:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
> > 297695 ± 8% +22.7% 365189 ± 2% interrupts.CPU1.LOC:Local_timer_interrupts
> > 8412 -20.9% 6655 ± 25% interrupts.CPU1.NMI:Non-maskable_interrupts
> > 8412 -20.9% 6655 ± 25% interrupts.CPU1.PMI:Performance_monitoring_interrupts
> > 297641 ± 8% +22.7% 365268 ± 2% interrupts.CPU10.LOC:Local_timer_interrupts
> > 8365 -10.9% 7455 ± 3% interrupts.CPU10.NMI:Non-maskable_interrupts
> > 8365 -10.9% 7455 ± 3% interrupts.CPU10.PMI:Performance_monitoring_interrupts
> > 297729 ± 8% +22.7% 365238 ± 2% interrupts.CPU11.LOC:Local_timer_interrupts
> > 8376 -21.8% 6554 ± 26% interrupts.CPU11.NMI:Non-maskable_interrupts
> > 8376 -21.8% 6554 ± 26% interrupts.CPU11.PMI:Performance_monitoring_interrupts
> > 297394 ± 8% +22.8% 365274 ± 2% interrupts.CPU12.LOC:Local_timer_interrupts
> > 8393 -10.5% 7512 ± 3% interrupts.CPU12.NMI:Non-maskable_interrupts
> > 8393 -10.5% 7512 ± 3% interrupts.CPU12.PMI:Performance_monitoring_interrupts
> > 297744 ± 8% +22.7% 365243 ± 2% interrupts.CPU13.LOC:Local_timer_interrupts
> > 8353 -10.6% 7469 ± 3% interrupts.CPU13.NMI:Non-maskable_interrupts
> > 8353 -10.6% 7469 ± 3% interrupts.CPU13.PMI:Performance_monitoring_interrupts
> > 148.50 ± 17% -24.2% 112.50 ± 8% interrupts.CPU13.TLB:TLB_shootdowns
> > 297692 ± 8% +22.7% 365311 ± 2% interrupts.CPU14.LOC:Local_timer_interrupts
> > 8374 -10.4% 7501 ± 4% interrupts.CPU14.NMI:Non-maskable_interrupts
> > 8374 -10.4% 7501 ± 4% interrupts.CPU14.PMI:Performance_monitoring_interrupts
> > 297453 ± 8% +22.8% 365311 ± 2% interrupts.CPU15.LOC:Local_timer_interrupts
> > 8336 -22.8% 6433 ± 26% interrupts.CPU15.NMI:Non-maskable_interrupts
> > 8336 -22.8% 6433 ± 26% interrupts.CPU15.PMI:Performance_monitoring_interrupts
> > 699.50 ± 21% +51.3% 1058 ± 10% interrupts.CPU15.RES:Rescheduling_interrupts
> > 3375 ± 93% -94.8% 174.75 ± 26% interrupts.CPU2.134:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
> > 297685 ± 8% +22.7% 365273 ± 2% interrupts.CPU2.LOC:Local_timer_interrupts
> > 8357 -21.2% 6584 ± 25% interrupts.CPU2.NMI:Non-maskable_interrupts
> > 8357 -21.2% 6584 ± 25% interrupts.CPU2.PMI:Performance_monitoring_interrupts
> > 164.00 ± 30% -23.0% 126.25 ± 32% interrupts.CPU2.TLB:TLB_shootdowns
> > 297352 ± 8% +22.9% 365371 ± 2% interrupts.CPU3.LOC:Local_timer_interrupts
> > 8383 -10.6% 7493 ± 4% interrupts.CPU3.NMI:Non-maskable_interrupts
> > 8383 -10.6% 7493 ± 4% interrupts.CPU3.PMI:Performance_monitoring_interrupts
> > 780.50 ± 3% +32.7% 1035 ± 6% interrupts.CPU3.RES:Rescheduling_interrupts
> > 297595 ± 8% +22.8% 365415 ± 2% interrupts.CPU4.LOC:Local_timer_interrupts
> > 8382 -21.4% 6584 ± 25% interrupts.CPU4.NMI:Non-maskable_interrupts
> > 8382 -21.4% 6584 ± 25% interrupts.CPU4.PMI:Performance_monitoring_interrupts
> > 297720 ± 8% +22.7% 365347 ± 2% interrupts.CPU5.LOC:Local_timer_interrupts
> > 8353 -32.0% 5679 ± 34% interrupts.CPU5.NMI:Non-maskable_interrupts
> > 8353 -32.0% 5679 ± 34% interrupts.CPU5.PMI:Performance_monitoring_interrupts
> > 727.00 ± 16% +98.3% 1442 ± 47% interrupts.CPU5.RES:Rescheduling_interrupts
> > 297620 ± 8% +22.8% 365343 ± 2% interrupts.CPU6.LOC:Local_timer_interrupts
> > 8388 -10.3% 7526 ± 4% interrupts.CPU6.NMI:Non-maskable_interrupts
> > 8388 -10.3% 7526 ± 4% interrupts.CPU6.PMI:Performance_monitoring_interrupts
> > 156.50 ± 3% -27.6% 113.25 ± 16% interrupts.CPU6.TLB:TLB_shootdowns
> > 297690 ± 8% +22.7% 365363 ± 2% interrupts.CPU7.LOC:Local_timer_interrupts
> > 8390 -23.1% 6449 ± 25% interrupts.CPU7.NMI:Non-maskable_interrupts
> > 8390 -23.1% 6449 ± 25% interrupts.CPU7.PMI:Performance_monitoring_interrupts
> > 918.00 ± 16% +49.4% 1371 ± 7% interrupts.CPU7.RES:Rescheduling_interrupts
> > 120.00 ± 35% +70.8% 205.00 ± 17% interrupts.CPU7.TLB:TLB_shootdowns
> > 297731 ± 8% +22.7% 365368 ± 2% interrupts.CPU8.LOC:Local_timer_interrupts
> > 8393 -32.5% 5668 ± 35% interrupts.CPU8.NMI:Non-maskable_interrupts
> > 8393 -32.5% 5668 ± 35% interrupts.CPU8.PMI:Performance_monitoring_interrupts
> > 297779 ± 8% +22.7% 365399 ± 2% interrupts.CPU9.LOC:Local_timer_interrupts
> > 8430 -10.8% 7517 ± 2% interrupts.CPU9.NMI:Non-maskable_interrupts
> > 8430 -10.8% 7517 ± 2% interrupts.CPU9.PMI:Performance_monitoring_interrupts
> > 956.50 +13.5% 1085 ± 4% interrupts.CPU9.RES:Rescheduling_interrupts
> > 4762118 ± 8% +22.7% 5845069 ± 2% interrupts.LOC:Local_timer_interrupts
> > 134093 -18.2% 109662 ± 11% interrupts.NMI:Non-maskable_interrupts
> > 134093 -18.2% 109662 ± 11% interrupts.PMI:Performance_monitoring_interrupts
> > 66.97 ± 9% -29.9 37.12 ± 49% perf-profile.calltrace.cycles-pp.deflate
> > 66.67 ± 9% -29.7 36.97 ± 50% perf-profile.calltrace.cycles-pp.deflate_medium.deflate
> > 43.24 ± 9% -18.6 24.61 ± 49% perf-profile.calltrace.cycles-pp.longest_match.deflate_medium.deflate
> > 2.29 ± 14% -1.2 1.05 ± 58% perf-profile.calltrace.cycles-pp.deflateSetDictionary
> > 0.74 ± 6% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.read.__libc_start_main
> > 0.74 ± 7% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read.__libc_start_main
> > 0.73 ± 7% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read.__libc_start_main
> > 0.73 ± 7% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read.__libc_start_main
> > 0.73 ± 7% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.ksys_read.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
> > 0.26 ±100% +0.6 0.88 ± 45% perf-profile.calltrace.cycles-pp.vfs_statx.__do_sys_newfstatat.__x64_sys_newfstatat.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 0.34 ±100% +0.7 1.02 ± 42% perf-profile.calltrace.cycles-pp.do_sys_open.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 0.28 ±100% +0.7 0.96 ± 44% perf-profile.calltrace.cycles-pp.__do_sys_newfstatat.__x64_sys_newfstatat.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 0.28 ±100% +0.7 0.96 ± 44% perf-profile.calltrace.cycles-pp.__x64_sys_newfstatat.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 0.34 ±100% +0.7 1.03 ± 42% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 0.00 +0.8 0.77 ± 35% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
> > 0.56 ± 9% +0.8 1.40 ± 45% perf-profile.calltrace.cycles-pp.__schedule.schedule.futex_wait_queue_me.futex_wait.do_futex
> > 0.58 ± 10% +0.9 1.43 ± 45% perf-profile.calltrace.cycles-pp.schedule.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
> > 0.33 ±100% +0.9 1.21 ± 28% perf-profile.calltrace.cycles-pp.menu_select.cpuidle_select.do_idle.cpu_startup_entry.start_secondary
> > 0.34 ± 99% +0.9 1.27 ± 30% perf-profile.calltrace.cycles-pp.cpuidle_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
> > 0.00 +1.0 0.96 ± 62% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
> > 0.62 ± 9% +1.0 1.60 ± 52% perf-profile.calltrace.cycles-pp.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex.do_syscall_64
> > 0.68 ± 10% +1.0 1.73 ± 51% perf-profile.calltrace.cycles-pp.futex_wait.do_futex.__x64_sys_futex.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 0.46 ±100% +1.1 1.60 ± 43% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
> > 0.47 ±100% +1.2 1.62 ± 43% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
> > 17.73 ± 21% +19.1 36.84 ± 25% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
> > 17.75 ± 20% +19.9 37.63 ± 26% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
> > 17.84 ± 20% +20.0 37.82 ± 26% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
> > 18.83 ± 20% +21.2 40.05 ± 27% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
> > 18.89 ± 20% +21.2 40.11 ± 27% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
> > 18.89 ± 20% +21.2 40.12 ± 27% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
> > 20.14 ± 20% +22.5 42.66 ± 27% perf-profile.calltrace.cycles-pp.secondary_startup_64
> > 66.97 ± 9% -29.9 37.12 ± 49% perf-profile.children.cycles-pp.deflate
> > 66.83 ± 9% -29.8 37.06 ± 49% perf-profile.children.cycles-pp.deflate_medium
> > 43.58 ± 9% -18.8 24.80 ± 49% perf-profile.children.cycles-pp.longest_match
> > 2.29 ± 14% -1.2 1.12 ± 43% perf-profile.children.cycles-pp.deflateSetDictionary
> > 0.84 -0.3 0.58 ± 19% perf-profile.children.cycles-pp.read
> > 0.52 ± 13% -0.2 0.27 ± 43% perf-profile.children.cycles-pp.fill_window
> > 0.06 +0.0 0.08 ± 13% perf-profile.children.cycles-pp.update_stack_state
> > 0.07 ± 14% +0.0 0.11 ± 24% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
> > 0.03 ±100% +0.1 0.09 ± 19% perf-profile.children.cycles-pp.find_next_and_bit
> > 0.00 +0.1 0.06 ± 15% perf-profile.children.cycles-pp.refcount_inc_not_zero_checked
> > 0.03 ±100% +0.1 0.08 ± 33% perf-profile.children.cycles-pp.free_pcppages_bulk
> > 0.07 ± 7% +0.1 0.12 ± 36% perf-profile.children.cycles-pp.syscall_return_via_sysret
> > 0.00 +0.1 0.06 ± 28% perf-profile.children.cycles-pp.rb_erase
> > 0.03 ±100% +0.1 0.09 ± 24% perf-profile.children.cycles-pp.shmem_undo_range
> > 0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.unlinkat
> > 0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.__x64_sys_unlinkat
> > 0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.do_unlinkat
> > 0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.ovl_destroy_inode
> > 0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.shmem_evict_inode
> > 0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.shmem_truncate_range
> > 0.05 +0.1 0.12 ± 38% perf-profile.children.cycles-pp.unmap_vmas
> > 0.00 +0.1 0.07 ± 30% perf-profile.children.cycles-pp.timerqueue_del
> > 0.00 +0.1 0.07 ± 26% perf-profile.children.cycles-pp.idle_cpu
> > 0.09 ± 17% +0.1 0.15 ± 19% perf-profile.children.cycles-pp.__update_load_avg_se
> > 0.00 +0.1 0.07 ± 33% perf-profile.children.cycles-pp.unmap_region
> > 0.00 +0.1 0.07 ± 33% perf-profile.children.cycles-pp.__alloc_fd
> > 0.03 ±100% +0.1 0.10 ± 31% perf-profile.children.cycles-pp.destroy_inode
> > 0.03 ±100% +0.1 0.10 ± 30% perf-profile.children.cycles-pp.evict
> > 0.06 ± 16% +0.1 0.13 ± 35% perf-profile.children.cycles-pp.ovl_override_creds
> > 0.00 +0.1 0.07 ± 26% perf-profile.children.cycles-pp.kernel_text_address
> > 0.00 +0.1 0.07 ± 41% perf-profile.children.cycles-pp.file_remove_privs
> > 0.07 ± 23% +0.1 0.14 ± 47% perf-profile.children.cycles-pp.hrtimer_next_event_without
> > 0.03 ±100% +0.1 0.10 ± 24% perf-profile.children.cycles-pp.__dentry_kill
> > 0.03 ±100% +0.1 0.10 ± 29% perf-profile.children.cycles-pp.dentry_unlink_inode
> > 0.03 ±100% +0.1 0.10 ± 29% perf-profile.children.cycles-pp.iput
> > 0.03 ±100% +0.1 0.10 ± 54% perf-profile.children.cycles-pp.__close_fd
> > 0.08 ± 25% +0.1 0.15 ± 35% perf-profile.children.cycles-pp.__switch_to
> > 0.00 +0.1 0.07 ± 29% perf-profile.children.cycles-pp.__switch_to_asm
> > 0.00 +0.1 0.08 ± 24% perf-profile.children.cycles-pp.__kernel_text_address
> > 0.03 ±100% +0.1 0.11 ± 51% perf-profile.children.cycles-pp.enqueue_hrtimer
> > 0.03 ±100% +0.1 0.11 ± 33% perf-profile.children.cycles-pp.rcu_gp_kthread_wake
> > 0.03 ±100% +0.1 0.11 ± 33% perf-profile.children.cycles-pp.swake_up_one
> > 0.03 ±100% +0.1 0.11 ± 33% perf-profile.children.cycles-pp.swake_up_locked
> > 0.10 ± 30% +0.1 0.18 ± 17% perf-profile.children.cycles-pp.irqtime_account_irq
> > 0.03 ±100% +0.1 0.11 ± 38% perf-profile.children.cycles-pp.unmap_page_range
> > 0.00 +0.1 0.09 ± 37% perf-profile.children.cycles-pp.putname
> > 0.03 ±100% +0.1 0.11 ± 28% perf-profile.children.cycles-pp.filemap_map_pages
> > 0.07 ± 28% +0.1 0.16 ± 35% perf-profile.children.cycles-pp.getname
> > 0.03 ±100% +0.1 0.11 ± 40% perf-profile.children.cycles-pp.unmap_single_vma
> > 0.08 ± 29% +0.1 0.17 ± 38% perf-profile.children.cycles-pp.queued_spin_lock_slowpath
> > 0.03 ±100% +0.1 0.12 ± 54% perf-profile.children.cycles-pp.setlocale
> > 0.03 ±100% +0.1 0.12 ± 60% perf-profile.children.cycles-pp.__open64_nocancel
> > 0.00 +0.1 0.09 ± 31% perf-profile.children.cycles-pp.__hrtimer_get_next_event
> > 0.00 +0.1 0.10 ± 28% perf-profile.children.cycles-pp.get_unused_fd_flags
> > 0.00 +0.1 0.10 ± 65% perf-profile.children.cycles-pp.timerqueue_add
> > 0.07 ± 28% +0.1 0.17 ± 42% perf-profile.children.cycles-pp.__hrtimer_next_event_base
> > 0.03 ±100% +0.1 0.12 ± 51% perf-profile.children.cycles-pp.__x64_sys_close
> > 0.00 +0.1 0.10 ± 38% perf-profile.children.cycles-pp.do_lookup_x
> > 0.03 ±100% +0.1 0.12 ± 23% perf-profile.children.cycles-pp.kmem_cache_free
> > 0.04 ±100% +0.1 0.14 ± 53% perf-profile.children.cycles-pp.__do_munmap
> > 0.00 +0.1 0.10 ± 35% perf-profile.children.cycles-pp.unwind_get_return_address
> > 0.00 +0.1 0.10 ± 49% perf-profile.children.cycles-pp.shmem_add_to_page_cache
> > 0.07 ± 20% +0.1 0.18 ± 25% perf-profile.children.cycles-pp.find_next_bit
> > 0.08 ± 25% +0.1 0.18 ± 34% perf-profile.children.cycles-pp.dput
> > 0.11 ± 33% +0.1 0.21 ± 37% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
> > 0.08 ± 5% +0.1 0.19 ± 27% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> > 0.00 +0.1 0.11 ± 52% perf-profile.children.cycles-pp.rcu_idle_exit
> > 0.03 ±100% +0.1 0.14 ± 18% perf-profile.children.cycles-pp.entry_SYSCALL_64
> > 0.08 +0.1 0.19 ± 43% perf-profile.children.cycles-pp.exit_mmap
> > 0.09 ± 22% +0.1 0.20 ± 57% perf-profile.children.cycles-pp.set_next_entity
> > 0.07 ± 7% +0.1 0.18 ± 60% perf-profile.children.cycles-pp.switch_mm_irqs_off
> > 0.10 ± 26% +0.1 0.21 ± 32% perf-profile.children.cycles-pp.sched_clock
> > 0.12 ± 25% +0.1 0.23 ± 39% perf-profile.children.cycles-pp.update_cfs_group
> > 0.07 ± 14% +0.1 0.18 ± 45% perf-profile.children.cycles-pp.lapic_next_deadline
> > 0.08 ± 5% +0.1 0.20 ± 44% perf-profile.children.cycles-pp.mmput
> > 0.11 ± 18% +0.1 0.23 ± 41% perf-profile.children.cycles-pp.clockevents_program_event
> > 0.07 ± 7% +0.1 0.18 ± 40% perf-profile.children.cycles-pp.strncpy_from_user
> > 0.00 +0.1 0.12 ± 48% perf-profile.children.cycles-pp.flush_old_exec
> > 0.11 ± 18% +0.1 0.23 ± 37% perf-profile.children.cycles-pp.native_sched_clock
> > 0.09 ± 17% +0.1 0.21 ± 46% perf-profile.children.cycles-pp._dl_sysdep_start
> > 0.12 ± 19% +0.1 0.26 ± 37% perf-profile.children.cycles-pp.tick_program_event
> > 0.09 ± 33% +0.1 0.23 ± 61% perf-profile.children.cycles-pp.mmap_region
> > 0.14 ± 21% +0.1 0.28 ± 39% perf-profile.children.cycles-pp.sched_clock_cpu
> > 0.11 ± 27% +0.1 0.25 ± 56% perf-profile.children.cycles-pp.do_mmap
> > 0.11 ± 36% +0.1 0.26 ± 57% perf-profile.children.cycles-pp.ksys_mmap_pgoff
> > 0.09 ± 17% +0.1 0.23 ± 48% perf-profile.children.cycles-pp.lookup_fast
> > 0.04 ±100% +0.2 0.19 ± 48% perf-profile.children.cycles-pp.open_path
> > 0.11 ± 30% +0.2 0.27 ± 58% perf-profile.children.cycles-pp.vm_mmap_pgoff
> > 0.11 ± 27% +0.2 0.28 ± 37% perf-profile.children.cycles-pp.update_blocked_averages
> > 0.11 +0.2 0.29 ± 38% perf-profile.children.cycles-pp.search_binary_handler
> > 0.11 ± 4% +0.2 0.29 ± 39% perf-profile.children.cycles-pp.load_elf_binary
> > 0.11 ± 30% +0.2 0.30 ± 50% perf-profile.children.cycles-pp.inode_permission
> > 0.04 ±100% +0.2 0.24 ± 55% perf-profile.children.cycles-pp.__GI___open64_nocancel
> > 0.15 ± 29% +0.2 0.35 ± 34% perf-profile.children.cycles-pp.getname_flags
> > 0.16 ± 25% +0.2 0.38 ± 26% perf-profile.children.cycles-pp.get_next_timer_interrupt
> > 0.18 ± 11% +0.2 0.41 ± 39% perf-profile.children.cycles-pp.execve
> > 0.19 ± 5% +0.2 0.42 ± 37% perf-profile.children.cycles-pp.__x64_sys_execve
> > 0.18 ± 2% +0.2 0.42 ± 37% perf-profile.children.cycles-pp.__do_execve_file
> > 0.32 ± 18% +0.3 0.57 ± 33% perf-profile.children.cycles-pp.__account_scheduler_latency
> > 0.21 ± 17% +0.3 0.48 ± 47% perf-profile.children.cycles-pp.schedule_idle
> > 0.20 ± 19% +0.3 0.49 ± 33% perf-profile.children.cycles-pp.tick_nohz_next_event
> > 0.21 ± 26% +0.3 0.55 ± 41% perf-profile.children.cycles-pp.find_busiest_group
> > 0.32 ± 26% +0.3 0.67 ± 52% perf-profile.children.cycles-pp.__handle_mm_fault
> > 0.22 ± 25% +0.4 0.57 ± 49% perf-profile.children.cycles-pp.filename_lookup
> > 0.34 ± 27% +0.4 0.72 ± 50% perf-profile.children.cycles-pp.handle_mm_fault
> > 0.42 ± 32% +0.4 0.80 ± 43% perf-profile.children.cycles-pp.shmem_getpage_gfp
> > 0.36 ± 23% +0.4 0.77 ± 41% perf-profile.children.cycles-pp.load_balance
> > 0.41 ± 30% +0.4 0.82 ± 50% perf-profile.children.cycles-pp.do_page_fault
> > 0.39 ± 30% +0.4 0.80 ± 50% perf-profile.children.cycles-pp.__do_page_fault
> > 0.28 ± 22% +0.4 0.70 ± 37% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
> > 0.43 ± 31% +0.4 0.86 ± 49% perf-profile.children.cycles-pp.page_fault
> > 0.31 ± 25% +0.5 0.77 ± 46% perf-profile.children.cycles-pp.user_path_at_empty
> > 0.36 ± 20% +0.5 0.84 ± 34% perf-profile.children.cycles-pp.newidle_balance
> > 0.45 ± 21% +0.5 0.95 ± 44% perf-profile.children.cycles-pp.vfs_statx
> > 0.46 ± 20% +0.5 0.97 ± 43% perf-profile.children.cycles-pp.__do_sys_newfstatat
> > 0.47 ± 20% +0.5 0.98 ± 44% perf-profile.children.cycles-pp.__x64_sys_newfstatat
> > 0.29 ± 37% +0.5 0.81 ± 32% perf-profile.children.cycles-pp.io_serial_in
> > 0.53 ± 25% +0.5 1.06 ± 49% perf-profile.children.cycles-pp.path_openat
> > 0.55 ± 24% +0.5 1.09 ± 50% perf-profile.children.cycles-pp.do_filp_open
> > 0.35 ± 2% +0.5 0.90 ± 60% perf-profile.children.cycles-pp.uart_console_write
> > 0.35 ± 4% +0.6 0.91 ± 60% perf-profile.children.cycles-pp.console_unlock
> > 0.35 ± 4% +0.6 0.91 ± 60% perf-profile.children.cycles-pp.univ8250_console_write
> > 0.35 ± 4% +0.6 0.91 ± 60% perf-profile.children.cycles-pp.serial8250_console_write
> > 0.82 ± 23% +0.6 1.42 ± 31% perf-profile.children.cycles-pp.__hrtimer_run_queues
> > 0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.irq_work_interrupt
> > 0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.smp_irq_work_interrupt
> > 0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.irq_work_run
> > 0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.perf_duration_warn
> > 0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.printk
> > 0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.vprintk_func
> > 0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.vprintk_default
> > 0.47 ± 28% +0.6 1.11 ± 39% perf-profile.children.cycles-pp.irq_work_run_list
> > 0.49 ± 31% +0.6 1.13 ± 39% perf-profile.children.cycles-pp.vprintk_emit
> > 0.54 ± 19% +0.6 1.17 ± 38% perf-profile.children.cycles-pp.pick_next_task_fair
> > 0.32 ± 7% +0.7 1.02 ± 56% perf-profile.children.cycles-pp.poll_idle
> > 0.60 ± 15% +0.7 1.31 ± 29% perf-profile.children.cycles-pp.menu_select
> > 0.65 ± 27% +0.7 1.36 ± 45% perf-profile.children.cycles-pp.do_sys_open
> > 0.62 ± 15% +0.7 1.36 ± 31% perf-profile.children.cycles-pp.cpuidle_select
> > 0.66 ± 26% +0.7 1.39 ± 44% perf-profile.children.cycles-pp.__x64_sys_openat
> > 1.11 ± 22% +0.9 2.03 ± 31% perf-profile.children.cycles-pp.hrtimer_interrupt
> > 0.89 ± 24% +0.9 1.81 ± 42% perf-profile.children.cycles-pp.futex_wait_queue_me
> > 1.16 ± 27% +1.0 2.13 ± 36% perf-profile.children.cycles-pp.schedule
> > 0.97 ± 23% +1.0 1.97 ± 42% perf-profile.children.cycles-pp.futex_wait
> > 1.33 ± 25% +1.2 2.55 ± 39% perf-profile.children.cycles-pp.__schedule
> > 1.84 ± 26% +1.6 3.42 ± 36% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
> > 1.76 ± 22% +1.6 3.41 ± 40% perf-profile.children.cycles-pp.do_futex
> > 1.79 ± 22% +1.7 3.49 ± 41% perf-profile.children.cycles-pp.__x64_sys_futex
> > 2.23 ± 20% +1.7 3.98 ± 37% perf-profile.children.cycles-pp.apic_timer_interrupt
> > 17.73 ± 21% +19.1 36.86 ± 25% perf-profile.children.cycles-pp.intel_idle
> > 19.00 ± 21% +21.1 40.13 ± 26% perf-profile.children.cycles-pp.cpuidle_enter_state
> > 19.02 ± 21% +21.2 40.19 ± 26% perf-profile.children.cycles-pp.cpuidle_enter
> > 18.89 ± 20% +21.2 40.12 ± 27% perf-profile.children.cycles-pp.start_secondary
> > 20.14 ± 20% +22.5 42.65 ± 27% perf-profile.children.cycles-pp.cpu_startup_entry
> > 20.08 ± 20% +22.5 42.59 ± 27% perf-profile.children.cycles-pp.do_idle
> > 20.14 ± 20% +22.5 42.66 ± 27% perf-profile.children.cycles-pp.secondary_startup_64
> > 43.25 ± 9% -18.6 24.63 ± 49% perf-profile.self.cycles-pp.longest_match
> > 22.74 ± 11% -10.8 11.97 ± 50% perf-profile.self.cycles-pp.deflate_medium
> > 2.26 ± 14% -1.2 1.11 ± 44% perf-profile.self.cycles-pp.deflateSetDictionary
> > 0.51 ± 12% -0.3 0.24 ± 57% perf-profile.self.cycles-pp.fill_window
> > 0.07 ± 7% +0.0 0.10 ± 24% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
> > 0.07 ± 7% +0.1 0.12 ± 36% perf-profile.self.cycles-pp.syscall_return_via_sysret
> > 0.08 ± 12% +0.1 0.14 ± 15% perf-profile.self.cycles-pp.__update_load_avg_se
> > 0.06 +0.1 0.13 ± 27% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
> > 0.08 ± 25% +0.1 0.15 ± 37% perf-profile.self.cycles-pp.__switch_to
> > 0.06 ± 16% +0.1 0.13 ± 29% perf-profile.self.cycles-pp.get_page_from_freelist
> > 0.00 +0.1 0.07 ± 29% perf-profile.self.cycles-pp.__switch_to_asm
> > 0.05 +0.1 0.13 ± 57% perf-profile.self.cycles-pp.switch_mm_irqs_off
> > 0.00 +0.1 0.08 ± 41% perf-profile.self.cycles-pp.interrupt_entry
> > 0.00 +0.1 0.08 ± 61% perf-profile.self.cycles-pp.run_timer_softirq
> > 0.07 ± 23% +0.1 0.15 ± 43% perf-profile.self.cycles-pp.__hrtimer_next_event_base
> > 0.03 ±100% +0.1 0.12 ± 43% perf-profile.self.cycles-pp.update_cfs_group
> > 0.08 ± 29% +0.1 0.17 ± 38% perf-profile.self.cycles-pp.queued_spin_lock_slowpath
> > 0.00 +0.1 0.09 ± 29% perf-profile.self.cycles-pp.strncpy_from_user
> > 0.06 ± 16% +0.1 0.15 ± 24% perf-profile.self.cycles-pp.find_next_bit
> > 0.00 +0.1 0.10 ± 35% perf-profile.self.cycles-pp.do_lookup_x
> > 0.00 +0.1 0.10 ± 13% perf-profile.self.cycles-pp.kmem_cache_free
> > 0.06 ± 16% +0.1 0.16 ± 30% perf-profile.self.cycles-pp.do_idle
> > 0.03 ±100% +0.1 0.13 ± 18% perf-profile.self.cycles-pp.entry_SYSCALL_64
> > 0.03 ±100% +0.1 0.14 ± 41% perf-profile.self.cycles-pp.update_blocked_averages
> > 0.11 ± 18% +0.1 0.22 ± 37% perf-profile.self.cycles-pp.native_sched_clock
> > 0.07 ± 14% +0.1 0.18 ± 46% perf-profile.self.cycles-pp.lapic_next_deadline
> > 0.00 +0.1 0.12 ± 65% perf-profile.self.cycles-pp.set_next_entity
> > 0.12 ± 28% +0.1 0.27 ± 32% perf-profile.self.cycles-pp.cpuidle_enter_state
> > 0.15 ± 3% +0.2 0.32 ± 39% perf-profile.self.cycles-pp.io_serial_out
> > 0.25 ± 4% +0.2 0.48 ± 19% perf-profile.self.cycles-pp.menu_select
> > 0.15 ± 22% +0.3 0.42 ± 46% perf-profile.self.cycles-pp.find_busiest_group
> > 0.29 ± 37% +0.4 0.71 ± 42% perf-profile.self.cycles-pp.io_serial_in
> > 0.32 ± 7% +0.7 1.02 ± 56% perf-profile.self.cycles-pp.poll_idle
> > 17.73 ± 21% +19.1 36.79 ± 25% perf-profile.self.cycles-pp.intel_idle
> >
> >
> >
> > phoronix-test-suite.compress-gzip.0.seconds
> >
> > 8 +-----------------------------------------------------------------------+
> > | O O O O O O O O |
> > 7 |-+ O O O O O O O O O |
> > 6 |-+ + + + |
> > | + : + + : + + + : |
> > 5 |-+ : : : : :: : : : : |
> > | :: : : : :: : : : :: : : |
> > 4 |:+: : : : : : : : : : : : : : : : : |
> > |: : : : : : : : : + + : : + : : : : : : : |
> > 3 |:+: : : : : : : : : : : : : : : : : : : : |
> > 2 |:+: : : : : : : : : : : : : : : : : : : : : : : |
> > |: : : : : : : : : : : : : : : : : : : : : : : : |
> > 1 |-: :: : : : : : : : : :: :: :: : : |
> > | : : : : : : : : : : : : |
> > 0 +-----------------------------------------------------------------------+
> >
> >
> > [*] bisect-good sample
> > [O] bisect-bad sample
> >
> > ***************************************************************************************************
> > lkp-cfl-d1: 12 threads Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz with 8G memory
> >
> >
> > ***************************************************************************************************
> > lkp-skl-fpga01: 104 threads Skylake with 192G memory
> > =========================================================================================
> > compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
> > gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-09-23.cgz/300s/8T/lkp-skl-fpga01/anon-cow-seq/vm-scalability/0x2000064
> >
> > commit:
> > fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> > 0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> > %stddev %change %stddev
> > \ | \
> > 413301 +3.1% 426103 vm-scalability.median
> > 0.04 ± 2% -34.0% 0.03 ± 12% vm-scalability.median_stddev
> > 43837589 +2.4% 44902458 vm-scalability.throughput
> > 181085 -18.7% 147221 vm-scalability.time.involuntary_context_switches
> > 12762365 ± 2% +3.9% 13262025 vm-scalability.time.minor_page_faults
> > 7773 +2.9% 7997 vm-scalability.time.percent_of_cpu_this_job_got
> > 11449 +1.2% 11589 vm-scalability.time.system_time
> > 12024 +4.7% 12584 vm-scalability.time.user_time
> > 439194 ± 2% +46.0% 641402 ± 2% vm-scalability.time.voluntary_context_switches
> > 1.148e+10 +5.0% 1.206e+10 vm-scalability.workload
> > 0.00 ± 54% +0.0 0.00 ± 17% mpstat.cpu.all.iowait%
> > 4767597 +52.5% 7268430 ± 41% numa-numastat.node1.local_node
> > 4781030 +52.3% 7280347 ± 41% numa-numastat.node1.numa_hit
> > 24.75 -9.1% 22.50 ± 2% vmstat.cpu.id
> > 37.50 +4.7% 39.25 vmstat.cpu.us
> > 6643 ± 3% +15.1% 7647 vmstat.system.cs
> > 12220504 +33.4% 16298593 ± 4% cpuidle.C1.time
> > 260215 ± 6% +55.3% 404158 ± 3% cpuidle.C1.usage
> > 4986034 ± 3% +56.2% 7786811 ± 2% cpuidle.POLL.time
> > 145941 ± 3% +61.2% 235218 ± 2% cpuidle.POLL.usage
> > 1990 +3.0% 2049 turbostat.Avg_MHz
> > 254633 ± 6% +56.7% 398892 ± 4% turbostat.C1
> > 0.04 +0.0 0.05 turbostat.C1%
> > 309.99 +1.5% 314.75 turbostat.RAMWatt
> > 1688 ± 11% +17.4% 1983 ± 5% slabinfo.UNIX.active_objs
> > 1688 ± 11% +17.4% 1983 ± 5% slabinfo.UNIX.num_objs
> > 2460 ± 3% -15.8% 2072 ± 11% slabinfo.dmaengine-unmap-16.active_objs
> > 2460 ± 3% -15.8% 2072 ± 11% slabinfo.dmaengine-unmap-16.num_objs
> > 2814 ± 9% +14.6% 3225 ± 4% slabinfo.sock_inode_cache.active_objs
> > 2814 ± 9% +14.6% 3225 ± 4% slabinfo.sock_inode_cache.num_objs
> > 0.67 ± 5% +0.1 0.73 ± 3% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault
> > 0.68 ± 6% +0.1 0.74 ± 2% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
> > 0.05 +0.0 0.07 ± 7% perf-profile.children.cycles-pp.schedule
> > 0.06 +0.0 0.08 ± 6% perf-profile.children.cycles-pp.__wake_up_common
> > 0.06 ± 7% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.wake_up_page_bit
> > 0.23 ± 7% +0.0 0.28 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> > 0.00 +0.1 0.05 perf-profile.children.cycles-pp.drm_fb_helper_sys_imageblit
> > 0.00 +0.1 0.05 perf-profile.children.cycles-pp.sys_imageblit
> > 29026 ± 3% -26.7% 21283 ± 44% numa-vmstat.node0.nr_inactive_anon
> > 30069 ± 3% -20.5% 23905 ± 26% numa-vmstat.node0.nr_shmem
> > 12120 ± 2% -15.5% 10241 ± 12% numa-vmstat.node0.nr_slab_reclaimable
> > 29026 ± 3% -26.7% 21283 ± 44% numa-vmstat.node0.nr_zone_inactive_anon
> > 4010893 +16.1% 4655889 ± 9% numa-vmstat.node1.nr_active_anon
> > 3982581 +16.3% 4632344 ± 9% numa-vmstat.node1.nr_anon_pages
> > 6861 +16.1% 7964 ± 8% numa-vmstat.node1.nr_anon_transparent_hugepages
> > 2317 ± 42% +336.9% 10125 ± 93% numa-vmstat.node1.nr_inactive_anon
> > 6596 ± 4% +18.2% 7799 ± 14% numa-vmstat.node1.nr_kernel_stack
> > 9629 ± 8% +66.4% 16020 ± 41% numa-vmstat.node1.nr_shmem
> > 7558 ± 3% +26.5% 9561 ± 14% numa-vmstat.node1.nr_slab_reclaimable
> > 4010227 +16.1% 4655056 ± 9% numa-vmstat.node1.nr_zone_active_anon
> > 2317 ± 42% +336.9% 10125 ± 93% numa-vmstat.node1.nr_zone_inactive_anon
> > 2859663 ± 2% +46.2% 4179500 ± 36% numa-vmstat.node1.numa_hit
> > 2680260 ± 2% +49.3% 4002218 ± 37% numa-vmstat.node1.numa_local
> > 116661 ± 3% -26.3% 86010 ± 44% numa-meminfo.node0.Inactive
> > 116192 ± 3% -26.7% 85146 ± 44% numa-meminfo.node0.Inactive(anon)
> > 48486 ± 2% -15.5% 40966 ± 12% numa-meminfo.node0.KReclaimable
> > 48486 ± 2% -15.5% 40966 ± 12% numa-meminfo.node0.SReclaimable
> > 120367 ± 3% -20.5% 95642 ± 26% numa-meminfo.node0.Shmem
> > 16210528 +15.2% 18673368 ± 6% numa-meminfo.node1.Active
> > 16210394 +15.2% 18673287 ± 6% numa-meminfo.node1.Active(anon)
> > 14170064 +15.6% 16379835 ± 7% numa-meminfo.node1.AnonHugePages
> > 16113351 +15.3% 18577254 ± 7% numa-meminfo.node1.AnonPages
> > 10534 ± 33% +293.8% 41480 ± 92% numa-meminfo.node1.Inactive
> > 9262 ± 42% +338.2% 40589 ± 93% numa-meminfo.node1.Inactive(anon)
> > 30235 ± 3% +26.5% 38242 ± 14% numa-meminfo.node1.KReclaimable
> > 6594 ± 4% +18.3% 7802 ± 14% numa-meminfo.node1.KernelStack
> > 17083646 +15.1% 19656922 ± 7% numa-meminfo.node1.MemUsed
> > 30235 ± 3% +26.5% 38242 ± 14% numa-meminfo.node1.SReclaimable
> > 38540 ± 8% +66.4% 64117 ± 42% numa-meminfo.node1.Shmem
> > 106342 +19.8% 127451 ± 11% numa-meminfo.node1.Slab
> > 9479688 +4.5% 9905902 proc-vmstat.nr_active_anon
> > 9434298 +4.5% 9856978 proc-vmstat.nr_anon_pages
> > 16194 +4.3% 16895 proc-vmstat.nr_anon_transparent_hugepages
> > 276.75 +3.6% 286.75 proc-vmstat.nr_dirtied
> > 3888633 -1.1% 3845882 proc-vmstat.nr_dirty_background_threshold
> > 7786774 -1.1% 7701168 proc-vmstat.nr_dirty_threshold
> > 39168820 -1.1% 38741444 proc-vmstat.nr_free_pages
> > 50391 +1.0% 50904 proc-vmstat.nr_slab_unreclaimable
> > 257.50 +3.6% 266.75 proc-vmstat.nr_written
> > 9479678 +4.5% 9905895 proc-vmstat.nr_zone_active_anon
> > 1501517 -5.9% 1412958 proc-vmstat.numa_hint_faults
> > 1075936 -13.1% 934706 proc-vmstat.numa_hint_faults_local
> > 17306395 +4.8% 18141722 proc-vmstat.numa_hit
> > 5211079 +4.2% 5427541 proc-vmstat.numa_huge_pte_updates
> > 17272620 +4.8% 18107691 proc-vmstat.numa_local
> > 33774 +0.8% 34031 proc-vmstat.numa_other
> > 690793 ± 3% -13.7% 596166 ± 2% proc-vmstat.numa_pages_migrated
> > 2.669e+09 +4.2% 2.78e+09 proc-vmstat.numa_pte_updates
> > 2.755e+09 +5.6% 2.909e+09 proc-vmstat.pgalloc_normal
> > 13573227 ± 2% +3.6% 14060842 proc-vmstat.pgfault
> > 2.752e+09 +5.6% 2.906e+09 proc-vmstat.pgfree
> > 1.723e+08 ± 2% +14.3% 1.97e+08 ± 8% proc-vmstat.pgmigrate_fail
> > 690793 ± 3% -13.7% 596166 ± 2% proc-vmstat.pgmigrate_success
> > 5015265 +5.0% 5266730 proc-vmstat.thp_deferred_split_page
> > 5019661 +5.0% 5271482 proc-vmstat.thp_fault_alloc
> > 18284 ± 62% -79.9% 3681 ±172% sched_debug.cfs_rq:/.MIN_vruntime.avg
> > 1901618 ± 62% -89.9% 192494 ±172% sched_debug.cfs_rq:/.MIN_vruntime.max
> > 185571 ± 62% -85.8% 26313 ±172% sched_debug.cfs_rq:/.MIN_vruntime.stddev
> > 15241 ± 6% -36.6% 9655 ± 6% sched_debug.cfs_rq:/.exec_clock.stddev
> > 18284 ± 62% -79.9% 3681 ±172% sched_debug.cfs_rq:/.max_vruntime.avg
> > 1901618 ± 62% -89.9% 192494 ±172% sched_debug.cfs_rq:/.max_vruntime.max
> > 185571 ± 62% -85.8% 26313 ±172% sched_debug.cfs_rq:/.max_vruntime.stddev
> > 898812 ± 7% -31.2% 618552 ± 5% sched_debug.cfs_rq:/.min_vruntime.stddev
> > 10.30 ± 12% +34.5% 13.86 ± 6% sched_debug.cfs_rq:/.nr_spread_over.avg
> > 34.75 ± 8% +95.9% 68.08 ± 4% sched_debug.cfs_rq:/.nr_spread_over.max
> > 9.12 ± 11% +82.3% 16.62 ± 9% sched_debug.cfs_rq:/.nr_spread_over.stddev
> > -1470498 -31.9% -1000709 sched_debug.cfs_rq:/.spread0.min
> > 899820 ± 7% -31.2% 618970 ± 5% sched_debug.cfs_rq:/.spread0.stddev
> > 1589 ± 9% -19.2% 1284 ± 9% sched_debug.cfs_rq:/.util_avg.max
> > 0.54 ± 39% +7484.6% 41.08 ± 92% sched_debug.cfs_rq:/.util_est_enqueued.min
> > 238.84 ± 8% -33.2% 159.61 ± 26% sched_debug.cfs_rq:/.util_est_enqueued.stddev
> > 10787 ± 2% +13.8% 12274 sched_debug.cpu.nr_switches.avg
> > 35242 ± 9% +32.3% 46641 ± 25% sched_debug.cpu.nr_switches.max
> > 9139 ± 3% +16.4% 10636 sched_debug.cpu.sched_count.avg
> > 32025 ± 10% +34.6% 43091 ± 27% sched_debug.cpu.sched_count.max
> > 4016 ± 2% +14.7% 4606 ± 5% sched_debug.cpu.sched_count.min
> > 2960 +38.3% 4093 sched_debug.cpu.sched_goidle.avg
> > 11201 ± 24% +75.8% 19691 ± 26% sched_debug.cpu.sched_goidle.max
> > 1099 ± 6% +56.9% 1725 ± 6% sched_debug.cpu.sched_goidle.min
> > 1877 ± 10% +32.5% 2487 ± 17% sched_debug.cpu.sched_goidle.stddev
> > 4348 ± 3% +19.3% 5188 sched_debug.cpu.ttwu_count.avg
> > 17832 ± 11% +78.6% 31852 ± 29% sched_debug.cpu.ttwu_count.max
> > 1699 ± 6% +28.2% 2178 ± 7% sched_debug.cpu.ttwu_count.min
> > 1357 ± 10% -22.6% 1050 ± 4% sched_debug.cpu.ttwu_local.avg
> > 11483 ± 5% -25.0% 8614 ± 15% sched_debug.cpu.ttwu_local.max
> > 1979 ± 12% -36.8% 1251 ± 10% sched_debug.cpu.ttwu_local.stddev
> > 3.941e+10 +5.0% 4.137e+10 perf-stat.i.branch-instructions
> > 0.02 ± 50% -0.0 0.02 ± 5% perf-stat.i.branch-miss-rate%
> > 67.94 -3.9 63.99 perf-stat.i.cache-miss-rate%
> > 8.329e+08 -1.9% 8.17e+08 perf-stat.i.cache-misses
> > 1.224e+09 +4.5% 1.28e+09 perf-stat.i.cache-references
> > 6650 ± 3% +15.5% 7678 perf-stat.i.context-switches
> > 1.64 -1.8% 1.61 perf-stat.i.cpi
> > 2.037e+11 +2.8% 2.095e+11 perf-stat.i.cpu-cycles
> > 257.56 -4.0% 247.13 perf-stat.i.cpu-migrations
> > 244.94 +4.5% 255.91 perf-stat.i.cycles-between-cache-misses
> > 1189446 ± 2% +3.2% 1227527 perf-stat.i.dTLB-load-misses
> > 2.669e+10 +4.7% 2.794e+10 perf-stat.i.dTLB-loads
> > 0.00 ± 7% -0.0 0.00 perf-stat.i.dTLB-store-miss-rate%
> > 337782 +4.5% 353044 perf-stat.i.dTLB-store-misses
> > 9.096e+09 +4.7% 9.526e+09 perf-stat.i.dTLB-stores
> > 39.50 +2.1 41.64 perf-stat.i.iTLB-load-miss-rate%
> > 296305 ± 2% +9.0% 323020 perf-stat.i.iTLB-load-misses
> > 1.238e+11 +4.9% 1.299e+11 perf-stat.i.instructions
> > 428249 ± 2% -4.4% 409553 perf-stat.i.instructions-per-iTLB-miss
> > 0.61 +1.6% 0.62 perf-stat.i.ipc
> > 44430 +3.8% 46121 perf-stat.i.minor-faults
> > 54.82 +3.9 58.73 perf-stat.i.node-load-miss-rate%
> > 68519419 ± 4% -11.7% 60479057 ± 6% perf-stat.i.node-load-misses
> > 49879161 ± 3% -20.7% 39554915 ± 4% perf-stat.i.node-loads
> > 44428 +3.8% 46119 perf-stat.i.page-faults
> > 0.02 -0.0 0.01 ± 5% perf-stat.overall.branch-miss-rate%
> > 68.03 -4.2 63.83 perf-stat.overall.cache-miss-rate%
> > 1.65 -2.0% 1.61 perf-stat.overall.cpi
> > 244.61 +4.8% 256.41 perf-stat.overall.cycles-between-cache-misses
> > 30.21 +2.2 32.38 perf-stat.overall.iTLB-load-miss-rate%
> > 417920 ± 2% -3.7% 402452 perf-stat.overall.instructions-per-iTLB-miss
> > 0.61 +2.1% 0.62 perf-stat.overall.ipc
> > 57.84 +2.6 60.44 perf-stat.overall.node-load-miss-rate%
> > 3.925e+10 +5.1% 4.124e+10 perf-stat.ps.branch-instructions
> > 8.295e+08 -1.8% 8.144e+08 perf-stat.ps.cache-misses
> > 1.219e+09 +4.6% 1.276e+09 perf-stat.ps.cache-references
> > 6625 ± 3% +15.4% 7648 perf-stat.ps.context-switches
> > 2.029e+11 +2.9% 2.088e+11 perf-stat.ps.cpu-cycles
> > 256.82 -4.2% 246.09 perf-stat.ps.cpu-migrations
> > 1184763 ± 2% +3.3% 1223366 perf-stat.ps.dTLB-load-misses
> > 2.658e+10 +4.8% 2.786e+10 perf-stat.ps.dTLB-loads
> > 336658 +4.5% 351710 perf-stat.ps.dTLB-store-misses
> > 9.059e+09 +4.8% 9.497e+09 perf-stat.ps.dTLB-stores
> > 295140 ± 2% +9.0% 321824 perf-stat.ps.iTLB-load-misses
> > 1.233e+11 +5.0% 1.295e+11 perf-stat.ps.instructions
> > 44309 +3.7% 45933 perf-stat.ps.minor-faults
> > 68208972 ± 4% -11.6% 60272675 ± 6% perf-stat.ps.node-load-misses
> > 49689740 ± 3% -20.7% 39401789 ± 4% perf-stat.ps.node-loads
> > 44308 +3.7% 45932 perf-stat.ps.page-faults
> > 3.732e+13 +5.1% 3.922e+13 perf-stat.total.instructions
> > 14949 ± 2% +14.5% 17124 ± 11% softirqs.CPU0.SCHED
> > 9940 +37.8% 13700 ± 24% softirqs.CPU1.SCHED
> > 9370 ± 2% +28.2% 12014 ± 16% softirqs.CPU10.SCHED
> > 17637 ± 2% -16.5% 14733 ± 16% softirqs.CPU101.SCHED
> > 17846 ± 3% -17.4% 14745 ± 16% softirqs.CPU103.SCHED
> > 9552 +24.7% 11916 ± 17% softirqs.CPU11.SCHED
> > 9210 ± 5% +27.9% 11784 ± 16% softirqs.CPU12.SCHED
> > 9378 ± 3% +27.7% 11974 ± 16% softirqs.CPU13.SCHED
> > 9164 ± 2% +29.4% 11856 ± 18% softirqs.CPU14.SCHED
> > 9215 +21.2% 11170 ± 19% softirqs.CPU15.SCHED
> > 9118 ± 2% +29.1% 11772 ± 16% softirqs.CPU16.SCHED
> > 9413 +29.2% 12165 ± 18% softirqs.CPU17.SCHED
> > 9309 ± 2% +29.9% 12097 ± 17% softirqs.CPU18.SCHED
> > 9423 +26.1% 11880 ± 15% softirqs.CPU19.SCHED
> > 9010 ± 7% +37.8% 12420 ± 18% softirqs.CPU2.SCHED
> > 9382 ± 3% +27.0% 11916 ± 15% softirqs.CPU20.SCHED
> > 9102 ± 4% +30.0% 11830 ± 16% softirqs.CPU21.SCHED
> > 9543 ± 3% +23.4% 11780 ± 18% softirqs.CPU22.SCHED
> > 8998 ± 5% +29.2% 11630 ± 18% softirqs.CPU24.SCHED
> > 9254 ± 2% +23.9% 11462 ± 19% softirqs.CPU25.SCHED
> > 18450 ± 4% -16.9% 15341 ± 16% softirqs.CPU26.SCHED
> > 17551 ± 4% -14.8% 14956 ± 13% softirqs.CPU27.SCHED
> > 17575 ± 4% -14.6% 15010 ± 14% softirqs.CPU28.SCHED
> > 17515 ± 5% -14.2% 15021 ± 13% softirqs.CPU29.SCHED
> > 17715 ± 2% -16.1% 14856 ± 13% softirqs.CPU30.SCHED
> > 17754 ± 4% -16.1% 14904 ± 13% softirqs.CPU31.SCHED
> > 17675 ± 2% -17.0% 14679 ± 21% softirqs.CPU32.SCHED
> > 17625 ± 2% -16.0% 14813 ± 13% softirqs.CPU34.SCHED
> > 17619 ± 2% -14.7% 15024 ± 14% softirqs.CPU35.SCHED
> > 17887 ± 3% -17.0% 14841 ± 14% softirqs.CPU36.SCHED
> > 17658 ± 3% -16.3% 14771 ± 12% softirqs.CPU38.SCHED
> > 17501 ± 2% -15.3% 14816 ± 14% softirqs.CPU39.SCHED
> > 9360 ± 2% +25.4% 11740 ± 14% softirqs.CPU4.SCHED
> > 17699 ± 4% -16.2% 14827 ± 14% softirqs.CPU42.SCHED
> > 17580 ± 3% -16.5% 14679 ± 15% softirqs.CPU43.SCHED
> > 17658 ± 3% -17.1% 14644 ± 14% softirqs.CPU44.SCHED
> > 17452 ± 4% -14.0% 15001 ± 15% softirqs.CPU46.SCHED
> > 17599 ± 4% -17.4% 14544 ± 14% softirqs.CPU47.SCHED
> > 17792 ± 3% -16.5% 14864 ± 14% softirqs.CPU48.SCHED
> > 17333 ± 2% -16.7% 14445 ± 14% softirqs.CPU49.SCHED
> > 9483 +32.3% 12547 ± 24% softirqs.CPU5.SCHED
> > 17842 ± 3% -15.9% 14997 ± 16% softirqs.CPU51.SCHED
> > 9051 ± 2% +23.3% 11160 ± 13% softirqs.CPU52.SCHED
> > 9385 ± 3% +25.2% 11752 ± 16% softirqs.CPU53.SCHED
> > 9446 ± 6% +24.9% 11798 ± 14% softirqs.CPU54.SCHED
> > 10006 ± 6% +22.4% 12249 ± 14% softirqs.CPU55.SCHED
> > 9657 +22.0% 11780 ± 16% softirqs.CPU57.SCHED
> > 9399 +27.5% 11980 ± 15% softirqs.CPU58.SCHED
> > 9234 ± 3% +27.7% 11795 ± 14% softirqs.CPU59.SCHED
> > 9726 ± 6% +24.0% 12062 ± 16% softirqs.CPU6.SCHED
> > 9165 ± 2% +23.7% 11342 ± 14% softirqs.CPU60.SCHED
> > 9357 ± 2% +25.8% 11774 ± 15% softirqs.CPU61.SCHED
> > 9406 ± 3% +25.2% 11780 ± 16% softirqs.CPU62.SCHED
> > 9489 +23.2% 11688 ± 15% softirqs.CPU63.SCHED
> > 9399 ± 2% +23.5% 11604 ± 16% softirqs.CPU65.SCHED
> > 8950 ± 2% +31.6% 11774 ± 16% softirqs.CPU66.SCHED
> > 9260 +21.7% 11267 ± 19% softirqs.CPU67.SCHED
> > 9187 +27.1% 11672 ± 17% softirqs.CPU68.SCHED
> > 9443 ± 2% +25.5% 11847 ± 17% softirqs.CPU69.SCHED
> > 9144 ± 3% +28.0% 11706 ± 16% softirqs.CPU7.SCHED
> > 9276 ± 2% +28.0% 11871 ± 17% softirqs.CPU70.SCHED
> > 9494 +21.4% 11526 ± 14% softirqs.CPU71.SCHED
> > 9124 ± 3% +27.8% 11657 ± 17% softirqs.CPU72.SCHED
> > 9189 ± 3% +25.9% 11568 ± 16% softirqs.CPU73.SCHED
> > 9392 ± 2% +23.7% 11619 ± 16% softirqs.CPU74.SCHED
> > 17821 ± 3% -14.7% 15197 ± 17% softirqs.CPU78.SCHED
> > 17581 ± 2% -15.7% 14827 ± 15% softirqs.CPU79.SCHED
> > 9123 +28.2% 11695 ± 15% softirqs.CPU8.SCHED
> > 17524 ± 2% -16.7% 14601 ± 14% softirqs.CPU80.SCHED
> > 17644 ± 3% -16.2% 14782 ± 14% softirqs.CPU81.SCHED
> > 17705 ± 3% -18.6% 14414 ± 22% softirqs.CPU84.SCHED
> > 17679 ± 2% -14.1% 15185 ± 11% softirqs.CPU85.SCHED
> > 17434 ± 3% -15.5% 14724 ± 14% softirqs.CPU86.SCHED
> > 17409 ± 2% -15.0% 14794 ± 13% softirqs.CPU87.SCHED
> > 17470 ± 3% -15.7% 14730 ± 13% softirqs.CPU88.SCHED
> > 17748 ± 4% -17.1% 14721 ± 12% softirqs.CPU89.SCHED
> > 9323 +28.0% 11929 ± 17% softirqs.CPU9.SCHED
> > 17471 ± 2% -16.9% 14525 ± 13% softirqs.CPU90.SCHED
> > 17900 ± 3% -17.0% 14850 ± 14% softirqs.CPU94.SCHED
> > 17599 ± 4% -17.4% 14544 ± 15% softirqs.CPU95.SCHED
> > 17697 ± 4% -17.7% 14569 ± 13% softirqs.CPU96.SCHED
> > 17561 ± 3% -15.1% 14901 ± 13% softirqs.CPU97.SCHED
> > 17404 ± 3% -16.1% 14601 ± 13% softirqs.CPU98.SCHED
> > 17802 ± 3% -19.4% 14344 ± 15% softirqs.CPU99.SCHED
> > 1310 Ä 10% -17.0% 1088 Ä 5% interrupts.CPU1.RES:Rescheduling_interrupts
> > 3427 +13.3% 3883 Ä 9% interrupts.CPU10.CAL:Function_call_interrupts
> > 736.50 Ä 20% +34.4% 989.75 Ä 17% interrupts.CPU100.RES:Rescheduling_interrupts
> > 3421 Ä 3% +14.6% 3921 Ä 9% interrupts.CPU101.CAL:Function_call_interrupts
> > 4873 Ä 8% +16.2% 5662 Ä 7% interrupts.CPU101.NMI:Non-maskable_interrupts
> > 4873 Ä 8% +16.2% 5662 Ä 7% interrupts.CPU101.PMI:Performance_monitoring_interrupts
> > 629.50 Ä 19% +83.2% 1153 Ä 46% interrupts.CPU101.RES:Rescheduling_interrupts
> > 661.75 Ä 14% +25.7% 832.00 Ä 13% interrupts.CPU102.RES:Rescheduling_interrupts
> > 4695 Ä 5% +15.5% 5420 Ä 9% interrupts.CPU103.NMI:Non-maskable_interrupts
> > 4695 Ä 5% +15.5% 5420 Ä 9% interrupts.CPU103.PMI:Performance_monitoring_interrupts
> > 3460 +12.1% 3877 Ä 9% interrupts.CPU11.CAL:Function_call_interrupts
> > 691.50 Ä 7% +41.0% 975.00 Ä 32% interrupts.CPU19.RES:Rescheduling_interrupts
> > 3413 Ä 2% +13.4% 3870 Ä 10% interrupts.CPU20.CAL:Function_call_interrupts
> > 3413 Ä 2% +13.4% 3871 Ä 10% interrupts.CPU22.CAL:Function_call_interrupts
> > 863.00 Ä 36% +45.3% 1254 Ä 24% interrupts.CPU23.RES:Rescheduling_interrupts
> > 659.75 Ä 12% +83.4% 1209 Ä 20% interrupts.CPU26.RES:Rescheduling_interrupts
> > 615.00 Ä 10% +87.8% 1155 Ä 14% interrupts.CPU27.RES:Rescheduling_interrupts
> > 663.75 Ä 5% +67.9% 1114 Ä 7% interrupts.CPU28.RES:Rescheduling_interrupts
> > 3421 Ä 4% +13.4% 3879 Ä 9% interrupts.CPU29.CAL:Function_call_interrupts
> > 805.25 Ä 16% +33.0% 1071 Ä 15% interrupts.CPU29.RES:Rescheduling_interrupts
> > 3482 Ä 3% +11.0% 3864 Ä 8% interrupts.CPU3.CAL:Function_call_interrupts
> > 819.75 Ä 19% +48.4% 1216 Ä 12% interrupts.CPU30.RES:Rescheduling_interrupts
> > 777.25 Ä 8% +31.6% 1023 Ä 6% interrupts.CPU31.RES:Rescheduling_interrupts
> > 844.50 Ä 25% +41.7% 1196 Ä 20% interrupts.CPU32.RES:Rescheduling_interrupts
> > 722.75 Ä 14% +94.2% 1403 Ä 26% interrupts.CPU33.RES:Rescheduling_interrupts
> > 3944 Ä 25% +36.8% 5394 Ä 9% interrupts.CPU34.NMI:Non-maskable_interrupts
> > 3944 Ä 25% +36.8% 5394 Ä 9% interrupts.CPU34.PMI:Performance_monitoring_interrupts
> > 781.75 Ä 9% +45.3% 1136 Ä 27% interrupts.CPU34.RES:Rescheduling_interrupts
> > 735.50 Ä 9% +33.3% 980.75 Ä 4% interrupts.CPU35.RES:Rescheduling_interrupts
> > 691.75 Ä 10% +41.6% 979.50 Ä 13% interrupts.CPU36.RES:Rescheduling_interrupts
> > 727.00 Ä 16% +47.7% 1074 Ä 15% interrupts.CPU37.RES:Rescheduling_interrupts
> > 4413 Ä 7% +24.9% 5511 Ä 9% interrupts.CPU38.NMI:Non-maskable_interrupts
> > 4413 Ä 7% +24.9% 5511 Ä 9% interrupts.CPU38.PMI:Performance_monitoring_interrupts
> > 708.75 Ä 25% +62.6% 1152 Ä 22% interrupts.CPU38.RES:Rescheduling_interrupts
> > 666.50 Ä 7% +57.8% 1052 Ä 13% interrupts.CPU39.RES:Rescheduling_interrupts
> > 765.75 Ä 11% +25.2% 958.75 Ä 14% interrupts.CPU4.RES:Rescheduling_interrupts
> > 3395 Ä 2% +15.1% 3908 Ä 10% interrupts.CPU40.CAL:Function_call_interrupts
> > 770.00 Ä 16% +45.3% 1119 Ä 18% interrupts.CPU40.RES:Rescheduling_interrupts
> > 740.50 Ä 26% +61.9% 1198 Ä 19% interrupts.CPU41.RES:Rescheduling_interrupts
> > 3459 Ä 2% +12.9% 3905 Ä 11% interrupts.CPU42.CAL:Function_call_interrupts
> > 4530 Ä 5% +22.8% 5564 Ä 9% interrupts.CPU42.NMI:Non-maskable_interrupts
> > 4530 Ä 5% +22.8% 5564 Ä 9% interrupts.CPU42.PMI:Performance_monitoring_interrupts
> > 3330 Ä 25% +60.0% 5328 Ä 10% interrupts.CPU44.NMI:Non-maskable_interrupts
> > 3330 Ä 25% +60.0% 5328 Ä 10% interrupts.CPU44.PMI:Performance_monitoring_interrupts
> > 686.25 Ä 9% +48.4% 1018 Ä 10% interrupts.CPU44.RES:Rescheduling_interrupts
> > 702.00 Ä 15% +38.6% 973.25 Ä 5% interrupts.CPU45.RES:Rescheduling_interrupts
> > 4742 Ä 7% +19.3% 5657 Ä 8% interrupts.CPU46.NMI:Non-maskable_interrupts
> > 4742 Ä 7% +19.3% 5657 Ä 8% interrupts.CPU46.PMI:Performance_monitoring_interrupts
> > 732.75 Ä 6% +51.9% 1113 Ä 7% interrupts.CPU46.RES:Rescheduling_interrupts
> > 775.50 Ä 17% +41.3% 1095 Ä 6% interrupts.CPU47.RES:Rescheduling_interrupts
> > 670.75 Ä 5% +60.7% 1078 Ä 6% interrupts.CPU48.RES:Rescheduling_interrupts
> > 4870 Ä 8% +16.5% 5676 Ä 7% interrupts.CPU49.NMI:Non-maskable_interrupts
> > 4870 Ä 8% +16.5% 5676 Ä 7% interrupts.CPU49.PMI:Performance_monitoring_interrupts
> > 694.75 Ä 12% +25.8% 874.00 Ä 11% interrupts.CPU49.RES:Rescheduling_interrupts
> > 686.00 Ä 9% +52.0% 1042 Ä 20% interrupts.CPU50.RES:Rescheduling_interrupts
> > 3361 +17.2% 3938 Ä 9% interrupts.CPU51.CAL:Function_call_interrupts
> > 4707 Ä 6% +16.0% 5463 Ä 8% interrupts.CPU51.NMI:Non-maskable_interrupts
> > 4707 Ä 6% +16.0% 5463 Ä 8% interrupts.CPU51.PMI:Performance_monitoring_interrupts
> > 638.75 Ä 12% +28.6% 821.25 Ä 15% interrupts.CPU54.RES:Rescheduling_interrupts
> > 677.50 Ä 8% +51.8% 1028 Ä 29% interrupts.CPU58.RES:Rescheduling_interrupts
> > 3465 Ä 2% +12.0% 3880 Ä 9% interrupts.CPU6.CAL:Function_call_interrupts
> > 641.25 Ä 2% +26.1% 808.75 Ä 10% interrupts.CPU60.RES:Rescheduling_interrupts
> > 599.75 Ä 2% +45.6% 873.50 Ä 8% interrupts.CPU62.RES:Rescheduling_interrupts
> > 661.50 Ä 9% +52.4% 1008 Ä 27% interrupts.CPU63.RES:Rescheduling_interrupts
> > 611.00 Ä 12% +31.1% 801.00 Ä 13% interrupts.CPU69.RES:Rescheduling_interrupts
> > 3507 Ä 2% +10.8% 3888 Ä 9% interrupts.CPU7.CAL:Function_call_interrupts
> > 664.00 Ä 5% +32.3% 878.50 Ä 23% interrupts.CPU70.RES:Rescheduling_interrupts
> > 5780 Ä 9% -38.8% 3540 Ä 37% interrupts.CPU73.NMI:Non-maskable_interrupts
> > 5780 Ä 9% -38.8% 3540 Ä 37% interrupts.CPU73.PMI:Performance_monitoring_interrupts
> > 5787 Ä 9% -26.7% 4243 Ä 28% interrupts.CPU76.NMI:Non-maskable_interrupts
> > 5787 Ä 9% -26.7% 4243 Ä 28% interrupts.CPU76.PMI:Performance_monitoring_interrupts
> > 751.50 Ä 15% +88.0% 1413 Ä 37% interrupts.CPU78.RES:Rescheduling_interrupts
> > 725.50 Ä 12% +82.9% 1327 Ä 36% interrupts.CPU79.RES:Rescheduling_interrupts
> > 714.00 Ä 18% +33.2% 951.00 Ä 15% interrupts.CPU80.RES:Rescheduling_interrupts
> > 706.25 Ä 19% +55.6% 1098 Ä 27% interrupts.CPU82.RES:Rescheduling_interrupts
> > 4524 Ä 6% +19.6% 5409 Ä 8% interrupts.CPU83.NMI:Non-maskable_interrupts
> > 4524 Ä 6% +19.6% 5409 Ä 8% interrupts.CPU83.PMI:Performance_monitoring_interrupts
> > 666.75 Ä 15% +37.3% 915.50 Ä 4% interrupts.CPU83.RES:Rescheduling_interrupts
> > 782.50 Ä 26% +57.6% 1233 Ä 21% interrupts.CPU84.RES:Rescheduling_interrupts
> > 622.75 Ä 12% +77.8% 1107 Ä 17% interrupts.CPU85.RES:Rescheduling_interrupts
> > 3465 Ä 3% +13.5% 3933 Ä 9% interrupts.CPU86.CAL:Function_call_interrupts
> > 714.75 Ä 14% +47.0% 1050 Ä 10% interrupts.CPU86.RES:Rescheduling_interrupts
> > 3519 Ä 2% +11.7% 3929 Ä 9% interrupts.CPU87.CAL:Function_call_interrupts
> > 582.75 Ä 10% +54.2% 898.75 Ä 11% interrupts.CPU87.RES:Rescheduling_interrupts
> > 713.00 Ä 10% +36.6% 974.25 Ä 11% interrupts.CPU88.RES:Rescheduling_interrupts
> > 690.50 Ä 13% +53.0% 1056 Ä 13% interrupts.CPU89.RES:Rescheduling_interrupts
> > 3477 +11.0% 3860 Ä 8% interrupts.CPU9.CAL:Function_call_interrupts
> > 684.50 Ä 14% +39.7% 956.25 Ä 11% interrupts.CPU90.RES:Rescheduling_interrupts
> > 3946 Ä 21% +39.8% 5516 Ä 10% interrupts.CPU91.NMI:Non-maskable_interrupts
> > 3946 Ä 21% +39.8% 5516 Ä 10% interrupts.CPU91.PMI:Performance_monitoring_interrupts
> > 649.00 Ä 13% +54.3% 1001 Ä 6% interrupts.CPU91.RES:Rescheduling_interrupts
> > 674.25 Ä 21% +39.5% 940.25 Ä 11% interrupts.CPU92.RES:Rescheduling_interrupts
> > 3971 Ä 26% +41.2% 5606 Ä 8% interrupts.CPU94.NMI:Non-maskable_interrupts
> > 3971 Ä 26% +41.2% 5606 Ä 8% interrupts.CPU94.PMI:Performance_monitoring_interrupts
> > 4129 Ä 22% +33.2% 5499 Ä 9% interrupts.CPU95.NMI:Non-maskable_interrupts
> > 4129 Ä 22% +33.2% 5499 Ä 9% interrupts.CPU95.PMI:Performance_monitoring_interrupts
> > 685.75 Ä 14% +38.0% 946.50 Ä 9% interrupts.CPU96.RES:Rescheduling_interrupts
> > 4630 Ä 11% +18.3% 5477 Ä 8% interrupts.CPU97.NMI:Non-maskable_interrupts
> > 4630 Ä 11% +18.3% 5477 Ä 8% interrupts.CPU97.PMI:Performance_monitoring_interrupts
> > 4835 Ä 9% +16.3% 5622 Ä 9% interrupts.CPU98.NMI:Non-maskable_interrupts
> > 4835 Ä 9% +16.3% 5622 Ä 9% interrupts.CPU98.PMI:Performance_monitoring_interrupts
> > 596.25 Ä 11% +81.8% 1083 Ä 9% interrupts.CPU98.RES:Rescheduling_interrupts
> > 674.75 Ä 17% +43.7% 969.50 Ä 5% interrupts.CPU99.RES:Rescheduling_interrupts
> > 78.25 Ä 13% +21.4% 95.00 Ä 10% interrupts.IWI:IRQ_work_interrupts
> > 85705 Ä 6% +26.0% 107990 Ä 6% interrupts.RES:Rescheduling_interrupts
> >
> >
> >
> > ***************************************************************************************************
> > lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/sc_pid_max/tbox_group/testcase/testtime/ucode:
> > scheduler/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/4194304/lkp-bdw-ep6/stress-ng/1s/0xb000038
> >
> > commit:
> > fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> > 0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> > %stddev %change %stddev
> > \ | \
> > 887157 ± 4% -23.1% 682080 ± 3% stress-ng.fault.ops
> > 887743 ± 4% -23.1% 682337 ± 3% stress-ng.fault.ops_per_sec
> > 9537184 ± 10% -21.2% 7518352 ± 14% stress-ng.hrtimers.ops_per_sec
> > 360922 ± 13% -21.1% 284734 ± 6% stress-ng.kill.ops
> > 361115 ± 13% -21.1% 284810 ± 6% stress-ng.kill.ops_per_sec
> > 23260649 -26.9% 17006477 ± 24% stress-ng.mq.ops
> > 23255884 -26.9% 17004540 ± 24% stress-ng.mq.ops_per_sec
> > 3291588 ± 3% +42.5% 4690316 ± 2% stress-ng.schedpolicy.ops
> > 3327913 ± 3% +41.5% 4709770 ± 2% stress-ng.schedpolicy.ops_per_sec
> > 48.14 -2.2% 47.09 stress-ng.time.elapsed_time
> > 48.14 -2.2% 47.09 stress-ng.time.elapsed_time.max
> > 5480 +3.7% 5681 stress-ng.time.percent_of_cpu_this_job_got
> > 2249 +1.3% 2278 stress-ng.time.system_time
> > 902759 ± 4% -22.6% 698616 ± 3% proc-vmstat.unevictable_pgs_culled
> > 98767954 ± 7% +16.4% 1.15e+08 ± 7% cpuidle.C1.time
> > 1181676 ± 12% -43.2% 671022 ± 37% cpuidle.C6.usage
> > 2.21 ± 7% +0.4 2.62 ± 10% turbostat.C1%
> > 1176838 ± 12% -43.2% 668921 ± 37% turbostat.C6
> > 3961223 ± 4% +12.8% 4469620 ± 5% vmstat.memory.cache
> > 439.50 ± 3% +14.7% 504.00 ± 9% vmstat.procs.r
> > 0.42 ± 7% -15.6% 0.35 ± 13% sched_debug.cfs_rq:/.nr_running.stddev
> > 0.00 ± 4% -18.1% 0.00 ± 16% sched_debug.cpu.next_balance.stddev
> > 0.41 ± 7% -15.1% 0.35 ± 13% sched_debug.cpu.nr_running.stddev
> > 9367 ± 9% -12.8% 8166 ± 2% softirqs.CPU1.SCHED
> > 35143 ± 6% -12.0% 30930 ± 2% softirqs.CPU22.TIMER
> > 31997 ± 4% -7.5% 29595 ± 2% softirqs.CPU27.TIMER
> > 3.64 ±173% -100.0% 0.00 iostat.sda.await.max
> > 3.64 ±173% -100.0% 0.00 iostat.sda.r_await.max
> > 3.90 ±173% -100.0% 0.00 iostat.sdc.await.max
> > 3.90 ±173% -100.0% 0.00 iostat.sdc.r_await.max
> > 12991737 ± 10% +61.5% 20979642 ± 8% numa-numastat.node0.local_node
> > 13073590 ± 10% +61.1% 21059448 ± 8% numa-numastat.node0.numa_hit
> > 20903562 ± 3% -32.2% 14164789 ± 3% numa-numastat.node1.local_node
> > 20993788 ± 3% -32.1% 14245636 ± 3% numa-numastat.node1.numa_hit
> > 90229 ± 4% -10.4% 80843 ± 9% numa-numastat.node1.other_node
> > 50.75 ± 90% +1732.0% 929.75 ±147% interrupts.CPU23.IWI:IRQ_work_interrupts
> > 40391 ± 59% -57.0% 17359 ± 11% interrupts.CPU24.RES:Rescheduling_interrupts
> > 65670 ± 11% -48.7% 33716 ± 54% interrupts.CPU42.RES:Rescheduling_interrupts
> > 42201 ± 46% -57.1% 18121 ± 35% interrupts.CPU49.RES:Rescheduling_interrupts
> > 293869 ± 44% +103.5% 598082 ± 23% interrupts.CPU52.LOC:Local_timer_interrupts
> > 17367 ± 8% +120.5% 38299 ± 44% interrupts.CPU55.RES:Rescheduling_interrupts
> > 1.127e+08 +3.8% 1.17e+08 ± 2% perf-stat.i.branch-misses
> > 11.10 +1.2 12.26 ± 6% perf-stat.i.cache-miss-rate%
> > 4.833e+10 ± 3% +4.7% 5.06e+10 perf-stat.i.instructions
> > 15009442 ± 4% +14.3% 17150138 ± 3% perf-stat.i.node-load-misses
> > 47.12 ± 5% +3.2 50.37 ± 5% perf-stat.i.node-store-miss-rate%
> > 6016833 ± 7% +17.0% 7036803 ± 3% perf-stat.i.node-store-misses
> > 1.044e+10 ± 2% +4.0% 1.086e+10 perf-stat.ps.branch-instructions
> > 1.364e+10 ± 3% +4.0% 1.418e+10 perf-stat.ps.dTLB-loads
> > 4.804e+10 ± 2% +4.1% 5.003e+10 perf-stat.ps.instructions
> > 14785608 ± 5% +11.3% 16451530 ± 3% perf-stat.ps.node-load-misses
> > 5968712 ± 7% +13.4% 6769847 ± 3% perf-stat.ps.node-store-misses
> > 13588 ± 4% +29.4% 17585 ± 9% slabinfo.Acpi-State.active_objs
> > 13588 ± 4% +29.4% 17585 ± 9% slabinfo.Acpi-State.num_objs
> > 20859 ± 3% -8.6% 19060 ± 4% slabinfo.kmalloc-192.num_objs
> > 488.00 ± 25% +41.0% 688.00 ± 5% slabinfo.kmalloc-rcl-128.active_objs
> > 488.00 ± 25% +41.0% 688.00 ± 5% slabinfo.kmalloc-rcl-128.num_objs
> > 39660 ± 3% +11.8% 44348 ± 2% slabinfo.radix_tree_node.active_objs
> > 44284 ± 3% +12.3% 49720 slabinfo.radix_tree_node.num_objs
> > 5811 ± 15% +16.1% 6746 ± 14% slabinfo.sighand_cache.active_objs
> > 402.00 ± 15% +17.5% 472.50 ± 14% slabinfo.sighand_cache.active_slabs
> > 6035 ± 15% +17.5% 7091 ± 14% slabinfo.sighand_cache.num_objs
> > 402.00 ± 15% +17.5% 472.50 ± 14% slabinfo.sighand_cache.num_slabs
> > 10282 ± 10% +12.9% 11604 ± 9% slabinfo.signal_cache.active_objs
> > 11350 ± 10% +12.8% 12808 ± 9% slabinfo.signal_cache.num_objs
> > 732920 ± 9% +162.0% 1919987 ± 11% numa-meminfo.node0.Active
> > 732868 ± 9% +162.0% 1919814 ± 11% numa-meminfo.node0.Active(anon)
> > 545019 ± 6% +61.0% 877443 ± 17% numa-meminfo.node0.AnonHugePages
> > 695015 ± 10% +46.8% 1020150 ± 14% numa-meminfo.node0.AnonPages
> > 638322 ± 4% +448.2% 3499399 ± 5% numa-meminfo.node0.FilePages
> > 81008 ± 14% +2443.4% 2060329 ± 3% numa-meminfo.node0.Inactive
> > 80866 ± 14% +2447.4% 2060022 ± 3% numa-meminfo.node0.Inactive(anon)
> > 86504 ± 10% +2287.3% 2065084 ± 3% numa-meminfo.node0.Mapped
> > 2010104 +160.8% 5242366 ± 5% numa-meminfo.node0.MemUsed
> > 16453 ± 15% +159.2% 42640 numa-meminfo.node0.PageTables
> > 112769 ± 13% +2521.1% 2955821 ± 7% numa-meminfo.node0.Shmem
> > 1839527 ± 4% -60.2% 732645 ± 23% numa-meminfo.node1.Active
> > 1839399 ± 4% -60.2% 732637 ± 23% numa-meminfo.node1.Active(anon)
> > 982237 ± 7% -45.9% 531445 ± 27% numa-meminfo.node1.AnonHugePages
> > 1149348 ± 8% -41.2% 676067 ± 25% numa-meminfo.node1.AnonPages
> > 3170649 ± 4% -77.2% 723230 ± 7% numa-meminfo.node1.FilePages
> > 1960718 ± 4% -91.8% 160773 ± 31% numa-meminfo.node1.Inactive
> > 1960515 ± 4% -91.8% 160722 ± 31% numa-meminfo.node1.Inactive(anon)
> > 118489 ± 11% -20.2% 94603 ± 3% numa-meminfo.node1.KReclaimable
> > 1966065 ± 4% -91.5% 166789 ± 29% numa-meminfo.node1.Mapped
> > 5034310 ± 3% -60.2% 2003121 ± 9% numa-meminfo.node1.MemUsed
> > 42684 ± 10% -64.2% 15283 ± 21% numa-meminfo.node1.PageTables
> > 118489 ± 11% -20.2% 94603 ± 3% numa-meminfo.node1.SReclaimable
> > 2644708 ± 5% -91.9% 214268 ± 24% numa-meminfo.node1.Shmem
> > 147513 ± 20% +244.2% 507737 ± 7% numa-vmstat.node0.nr_active_anon
> > 137512 ± 21% +105.8% 282999 ± 3% numa-vmstat.node0.nr_anon_pages
> > 210.25 ± 33% +124.7% 472.50 ± 11% numa-vmstat.node0.nr_anon_transparent_hugepages
> > 158008 ± 4% +454.7% 876519 ± 6% numa-vmstat.node0.nr_file_pages
> > 18416 ± 27% +2711.4% 517747 ± 3% numa-vmstat.node0.nr_inactive_anon
> > 26255 ± 22% +34.3% 35251 ± 10% numa-vmstat.node0.nr_kernel_stack
> > 19893 ± 23% +2509.5% 519129 ± 3% numa-vmstat.node0.nr_mapped
> > 3928 ± 22% +179.4% 10976 ± 4% numa-vmstat.node0.nr_page_table_pages
> > 26623 ± 18% +2681.9% 740635 ± 7% numa-vmstat.node0.nr_shmem
> > 147520 ± 20% +244.3% 507885 ± 7% numa-vmstat.node0.nr_zone_active_anon
> > 18415 ± 27% +2711.5% 517739 ± 3% numa-vmstat.node0.nr_zone_inactive_anon
> > 6937137 ± 8% +55.9% 10814957 ± 7% numa-vmstat.node0.numa_hit
> > 6860210 ± 8% +56.6% 10739902 ± 7% numa-vmstat.node0.numa_local
> > 425559 ± 13% -52.9% 200300 ± 17% numa-vmstat.node1.nr_active_anon
> > 786341 ± 4% -76.6% 183664 ± 7% numa-vmstat.node1.nr_file_pages
> > 483646 ± 4% -90.8% 44606 ± 29% numa-vmstat.node1.nr_inactive_anon
> > 485120 ± 4% -90.5% 46130 ± 27% numa-vmstat.node1.nr_mapped
> > 10471 ± 6% -61.3% 4048 ± 18% numa-vmstat.node1.nr_page_table_pages
> > 654852 ± 5% -91.4% 56439 ± 25% numa-vmstat.node1.nr_shmem
> > 29681 ± 11% -20.3% 23669 ± 3% numa-vmstat.node1.nr_slab_reclaimable
> > 425556 ± 13% -52.9% 200359 ± 17% numa-vmstat.node1.nr_zone_active_anon
> > 483649 ± 4% -90.8% 44600 ± 29% numa-vmstat.node1.nr_zone_inactive_anon
> > 10527487 ± 5% -31.3% 7233899 ± 6% numa-vmstat.node1.numa_hit
> > 10290625 ± 5% -31.9% 7006050 ± 7% numa-vmstat.node1.numa_local
> >
> >
> >
> > ***************************************************************************************************
> > lkp-csl-2sp5: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
> > interrupt/gcc-7/performance/1HDD/x86_64-fedora-25/100%/debian-x86_64-2019-11-14.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002c
> >
> > commit:
> > fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> > 0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> > %stddev %change %stddev
> > \ | \
> > 6684836 -33.3% 4457559 ± 4% stress-ng.schedpolicy.ops
> > 6684766 -33.3% 4457633 ± 4% stress-ng.schedpolicy.ops_per_sec
> > 19978129 -28.8% 14231813 ± 16% stress-ng.time.involuntary_context_switches
> > 82.49 ± 2% -5.2% 78.23 stress-ng.time.user_time
> > 106716 ± 29% +40.3% 149697 ± 2% meminfo.max_used_kB
> > 4.07 ± 22% +1.2 5.23 ± 5% mpstat.cpu.all.irq%
> > 2721317 ± 10% +66.5% 4531100 ± 22% cpuidle.POLL.time
> > 71470 ± 18% +41.1% 100822 ± 11% cpuidle.POLL.usage
> > 841.00 ± 41% -50.4% 417.25 ± 17% numa-meminfo.node0.Dirty
> > 7096 ± 7% +25.8% 8930 ± 9% numa-meminfo.node1.KernelStack
> > 68752 ± 90% -45.9% 37169 ±143% sched_debug.cfs_rq:/.runnable_weight.stddev
> > 654.93 ± 11% +19.3% 781.09 ± 2% sched_debug.cpu.clock_task.stddev
> > 183.06 ± 83% -76.9% 42.20 ± 17% iostat.sda.await.max
> > 627.47 ±102% -96.7% 20.52 ± 38% iostat.sda.r_await.max
> > 183.08 ± 83% -76.9% 42.24 ± 17% iostat.sda.w_await.max
> > 209.00 ± 41% -50.2% 104.00 ± 17% numa-vmstat.node0.nr_dirty
> > 209.50 ± 41% -50.4% 104.00 ± 17% numa-vmstat.node0.nr_zone_write_pending
> > 6792 ± 8% +34.4% 9131 ± 7% numa-vmstat.node1.nr_kernel_stack
> > 3.57 ±173% +9.8 13.38 ± 25% perf-profile.calltrace.cycles-pp.proc_reg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 3.57 ±173% +9.8 13.38 ± 25% perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.vfs_read.ksys_read.do_syscall_64
> > 3.57 ±173% +9.8 13.39 ± 25% perf-profile.children.cycles-pp.proc_reg_read
> > 3.57 ±173% +12.6 16.16 ± 28% perf-profile.children.cycles-pp.seq_read
> > 7948 ± 56% -53.1% 3730 ± 5% softirqs.CPU25.RCU
> > 6701 ± 33% -46.7% 3570 ± 5% softirqs.CPU34.RCU
> > 8232 ± 89% -60.5% 3247 softirqs.CPU50.RCU
> > 326269 ± 16% -27.4% 236940 softirqs.RCU
> > 68066 +7.9% 73438 proc-vmstat.nr_active_anon
> > 67504 +7.8% 72783 proc-vmstat.nr_anon_pages
> > 7198 ± 19% +34.2% 9658 ± 2% proc-vmstat.nr_page_table_pages
> > 40664 ± 8% +10.1% 44766 proc-vmstat.nr_slab_unreclaimable
> > 68066 +7.9% 73438 proc-vmstat.nr_zone_active_anon
> > 1980169 ± 4% -5.3% 1875307 proc-vmstat.numa_hit
> > 1960247 ± 4% -5.4% 1855033 proc-vmstat.numa_local
> > 956008 ± 16% -17.8% 786247 proc-vmstat.pgfault
> > 26598 ± 76% +301.2% 106716 ± 45% interrupts.CPU1.RES:Rescheduling_interrupts
> > 151212 ± 39% -67.3% 49451 ± 57% interrupts.CPU26.RES:Rescheduling_interrupts
> > 1013586 ± 2% -10.9% 903528 ± 7% interrupts.CPU27.LOC:Local_timer_interrupts
> > 1000980 ± 2% -11.4% 886740 ± 8% interrupts.CPU31.LOC:Local_timer_interrupts
> > 1021043 ± 3% -9.9% 919686 ± 6% interrupts.CPU32.LOC:Local_timer_interrupts
> > 125222 ± 51% -86.0% 17483 ±106% interrupts.CPU33.RES:Rescheduling_interrupts
> > 1003735 ± 2% -11.1% 891833 ± 8% interrupts.CPU34.LOC:Local_timer_interrupts
> > 1021799 ± 2% -13.2% 886665 ± 8% interrupts.CPU38.LOC:Local_timer_interrupts
> > 997788 ± 2% -13.2% 866427 ± 10% interrupts.CPU42.LOC:Local_timer_interrupts
> > 1001618 -11.6% 885490 ± 9% interrupts.CPU45.LOC:Local_timer_interrupts
> > 22321 ± 58% +550.3% 145153 ± 22% interrupts.CPU9.RES:Rescheduling_interrupts
> > 3151 ± 53% +67.3% 5273 ± 8% slabinfo.avc_xperms_data.active_objs
> > 3151 ± 53% +67.3% 5273 ± 8% slabinfo.avc_xperms_data.num_objs
> > 348.75 ± 13% +39.8% 487.50 ± 5% slabinfo.biovec-128.active_objs
> > 348.75 ± 13% +39.8% 487.50 ± 5% slabinfo.biovec-128.num_objs
> > 13422 ± 97% +121.1% 29678 ± 2% slabinfo.btrfs_extent_map.active_objs
> > 14638 ± 98% +117.8% 31888 ± 2% slabinfo.btrfs_extent_map.num_objs
> > 3835 ± 18% +40.9% 5404 ± 7% slabinfo.dmaengine-unmap-16.active_objs
> > 3924 ± 18% +39.9% 5490 ± 8% slabinfo.dmaengine-unmap-16.num_objs
> > 3482 ± 96% +119.1% 7631 ± 10% slabinfo.khugepaged_mm_slot.active_objs
> > 3573 ± 96% +119.4% 7839 ± 10% slabinfo.khugepaged_mm_slot.num_objs
> > 8629 ± 52% -49.2% 4384 slabinfo.kmalloc-rcl-64.active_objs
> > 8629 ± 52% -49.2% 4384 slabinfo.kmalloc-rcl-64.num_objs
> > 2309 ± 57% +82.1% 4206 ± 5% slabinfo.mnt_cache.active_objs
> > 2336 ± 57% +80.8% 4224 ± 5% slabinfo.mnt_cache.num_objs
> > 5320 ± 48% +69.1% 8999 ± 23% slabinfo.pool_workqueue.active_objs
> > 165.75 ± 48% +69.4% 280.75 ± 23% slabinfo.pool_workqueue.active_slabs
> > 5320 ± 48% +69.2% 8999 ± 23% slabinfo.pool_workqueue.num_objs
> > 165.75 ± 48% +69.4% 280.75 ± 23% slabinfo.pool_workqueue.num_slabs
> > 3306 ± 15% +27.0% 4199 ± 3% slabinfo.task_group.active_objs
> > 3333 ± 16% +30.1% 4336 ± 3% slabinfo.task_group.num_objs
> > 14.74 ± 2% +1.8 16.53 ± 2% perf-stat.i.cache-miss-rate%
> > 22459727 ± 20% +46.7% 32955572 ± 4% perf-stat.i.cache-misses
> > 33575 ± 19% +68.8% 56658 ± 13% perf-stat.i.cpu-migrations
> > 0.03 ± 20% +0.0 0.05 ± 8% perf-stat.i.dTLB-load-miss-rate%
> > 6351703 ± 33% +47.2% 9352532 ± 9% perf-stat.i.dTLB-load-misses
> > 0.45 ± 3% -3.0% 0.44 perf-stat.i.ipc
> > 4711345 ± 18% +43.9% 6780944 ± 7% perf-stat.i.node-load-misses
> > 82.51 +4.5 86.97 perf-stat.i.node-store-miss-rate%
> > 2861142 ± 31% +60.8% 4601146 ± 5% perf-stat.i.node-store-misses
> > 0.92 ± 6% -0.1 0.85 ± 2% perf-stat.overall.branch-miss-rate%
> > 0.02 ± 3% +0.0 0.02 ± 4% perf-stat.overall.dTLB-store-miss-rate%
> > 715.05 ± 5% +9.9% 785.50 ± 4% perf-stat.overall.instructions-per-iTLB-miss
> > 0.44 ± 2% -5.4% 0.42 ± 2% perf-stat.overall.ipc
> > 79.67 +2.1 81.80 ± 2% perf-stat.overall.node-store-miss-rate%
> > 22237897 ± 19% +46.4% 32560557 ± 5% perf-stat.ps.cache-misses
> > 32491 ± 18% +70.5% 55390 ± 13% perf-stat.ps.cpu-migrations
> > 6071108 ± 31% +45.0% 8804767 ± 9% perf-stat.ps.dTLB-load-misses
> > 1866 ± 98% -91.9% 150.48 ± 2% perf-stat.ps.major-faults
> > 4593546 ± 16% +42.4% 6541402 ± 7% perf-stat.ps.node-load-misses
> > 2757176 ± 29% +58.4% 4368169 ± 5% perf-stat.ps.node-store-misses
> > 1.303e+12 ± 3% -9.8% 1.175e+12 ± 3% perf-stat.total.instructions
> >
> >
> >
> > ***************************************************************************************************
> > lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
> > interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/lkp-bdw-ep6/stress-ng/30s/0xb000038
> >
> > commit:
> > fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> > 0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> > fail:runs %reproduction fail:runs
> > | | |
> > 1:4 -25% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
> > %stddev %change %stddev
> > \ | \
> > 98245522 +42.3% 1.398e+08 stress-ng.schedpolicy.ops
> > 3274860 +42.3% 4661027 stress-ng.schedpolicy.ops_per_sec
> > 3.473e+08 -9.7% 3.137e+08 stress-ng.sigq.ops
> > 11576537 -9.7% 10454846 stress-ng.sigq.ops_per_sec
> > 38097605 ± 6% +10.3% 42011440 ± 4% stress-ng.sigrt.ops
> > 1269646 ± 6% +10.3% 1400024 ± 4% stress-ng.sigrt.ops_per_sec
> > 3.628e+08 ± 4% -21.5% 2.848e+08 ± 10% stress-ng.time.involuntary_context_switches
> > 7040 +2.9% 7245 stress-ng.time.percent_of_cpu_this_job_got
> > 15.09 ± 3% -13.4% 13.07 ± 5% iostat.cpu.idle
> > 14.82 ± 3% -2.0 12.80 ± 5% mpstat.cpu.all.idle%
> > 3.333e+08 ± 17% +59.9% 5.331e+08 ± 22% cpuidle.C1.time
> > 5985148 ± 23% +112.5% 12719679 ± 20% cpuidle.C1E.usage
> > 14.50 ± 3% -12.1% 12.75 ± 6% vmstat.cpu.id
> > 1113131 ± 2% -10.5% 996285 ± 3% vmstat.system.cs
> > 2269 +2.4% 2324 turbostat.Avg_MHz
> > 0.64 ± 17% +0.4 1.02 ± 23% turbostat.C1%
> > 5984799 ± 23% +112.5% 12719086 ± 20% turbostat.C1E
> > 4.17 ± 32% -46.0% 2.25 ± 38% turbostat.Pkg%pc2
> > 216.57 +2.1% 221.12 turbostat.PkgWatt
> > 13.33 ± 3% +3.9% 13.84 turbostat.RAMWatt
> > 99920 +13.6% 113486 ± 15% proc-vmstat.nr_active_anon
> > 5738 +1.2% 5806 proc-vmstat.nr_inactive_anon
> > 46788 +2.1% 47749 proc-vmstat.nr_slab_unreclaimable
> > 99920 +13.6% 113486 ± 15% proc-vmstat.nr_zone_active_anon
> > 5738 +1.2% 5806 proc-vmstat.nr_zone_inactive_anon
> > 3150 ± 2% +35.4% 4265 ± 33% proc-vmstat.numa_huge_pte_updates
> > 1641223 +34.3% 2203844 ± 32% proc-vmstat.numa_pte_updates
> > 13575 ± 18% +62.1% 21999 ± 4% slabinfo.ext4_extent_status.active_objs
> > 13954 ± 17% +57.7% 21999 ± 4% slabinfo.ext4_extent_status.num_objs
> > 2527 ± 4% +9.8% 2774 ± 2% slabinfo.khugepaged_mm_slot.active_objs
> > 2527 ± 4% +9.8% 2774 ± 2% slabinfo.khugepaged_mm_slot.num_objs
> > 57547 ± 8% -15.3% 48743 ± 9% slabinfo.kmalloc-rcl-64.active_objs
> > 898.75 ± 8% -15.3% 761.00 ± 9% slabinfo.kmalloc-rcl-64.active_slabs
> > 57547 ± 8% -15.3% 48743 ± 9% slabinfo.kmalloc-rcl-64.num_objs
> > 898.75 ± 8% -15.3% 761.00 ± 9% slabinfo.kmalloc-rcl-64.num_slabs
> > 1.014e+10 +1.7% 1.031e+10 perf-stat.i.branch-instructions
> > 13.37 ± 4% +2.0 15.33 ± 3% perf-stat.i.cache-miss-rate%
> > 1.965e+11 +2.6% 2.015e+11 perf-stat.i.cpu-cycles
> > 20057708 ± 4% +13.9% 22841468 ± 4% perf-stat.i.iTLB-loads
> > 4.973e+10 +1.4% 5.042e+10 perf-stat.i.instructions
> > 3272 ± 2% +2.9% 3366 perf-stat.i.minor-faults
> > 4500892 ± 3% +18.9% 5351518 ± 6% perf-stat.i.node-store-misses
> > 3.91 +1.3% 3.96 perf-stat.overall.cpi
> > 69.62 -1.5 68.11 perf-stat.overall.iTLB-load-miss-rate%
> > 1.047e+10 +1.3% 1.061e+10 perf-stat.ps.branch-instructions
> > 1117454 ± 2% -10.6% 999467 ± 3% perf-stat.ps.context-switches
> > 1.986e+11 +2.4% 2.033e+11 perf-stat.ps.cpu-cycles
> > 19614413 ± 4% +13.6% 22288555 ± 4% perf-stat.ps.iTLB-loads
> > 3493 -1.1% 3453 perf-stat.ps.minor-faults
> > 4546636 ± 3% +17.0% 5321658 ± 5% perf-stat.ps.node-store-misses
> > 0.64 ± 3% -0.2 0.44 ± 57% perf-profile.calltrace.cycles-pp.common_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 0.66 ± 3% -0.1 0.58 ± 7% perf-profile.children.cycles-pp.common_timer_get
> > 0.44 ± 4% -0.1 0.39 ± 5% perf-profile.children.cycles-pp.posix_ktime_get_ts
> > 0.39 ± 5% -0.0 0.34 ± 6% perf-profile.children.cycles-pp.ktime_get_ts64
> > 0.07 ± 17% +0.0 0.10 ± 8% perf-profile.children.cycles-pp.task_tick_fair
> > 0.08 ± 15% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.scheduler_tick
> > 0.46 ± 5% +0.1 0.54 ± 6% perf-profile.children.cycles-pp.__might_sleep
> > 0.69 ± 8% +0.2 0.85 ± 12% perf-profile.children.cycles-pp.___might_sleep
> > 0.90 ± 5% -0.2 0.73 ± 9% perf-profile.self.cycles-pp.__might_fault
> > 0.40 ± 6% -0.1 0.33 ± 9% perf-profile.self.cycles-pp.do_timer_gettime
> > 0.50 ± 4% -0.1 0.45 ± 7% perf-profile.self.cycles-pp.put_itimerspec64
> > 0.32 ± 2% -0.0 0.27 ± 9% perf-profile.self.cycles-pp.update_curr_fair
> > 0.20 ± 6% -0.0 0.18 ± 2% perf-profile.self.cycles-pp.ktime_get_ts64
> > 0.08 ± 23% +0.0 0.12 ± 8% perf-profile.self.cycles-pp._raw_spin_trylock
> > 0.42 ± 5% +0.1 0.50 ± 6% perf-profile.self.cycles-pp.__might_sleep
> > 0.66 ± 9% +0.2 0.82 ± 12% perf-profile.self.cycles-pp.___might_sleep
> > 47297 ± 13% +19.7% 56608 ± 5% softirqs.CPU13.SCHED
> > 47070 ± 3% +20.5% 56735 ± 7% softirqs.CPU2.SCHED
> > 55443 ± 9% -20.2% 44250 ± 2% softirqs.CPU28.SCHED
> > 56633 ± 3% -12.6% 49520 ± 7% softirqs.CPU34.SCHED
> > 56599 ± 11% -18.0% 46384 ± 2% softirqs.CPU36.SCHED
> > 56909 ± 9% -18.4% 46438 ± 6% softirqs.CPU40.SCHED
> > 45062 ± 9% +28.1% 57709 ± 9% softirqs.CPU45.SCHED
> > 43959 +28.7% 56593 ± 9% softirqs.CPU49.SCHED
> > 46235 ± 10% +22.2% 56506 ± 11% softirqs.CPU5.SCHED
> > 44779 ± 12% +22.5% 54859 ± 11% softirqs.CPU57.SCHED
> > 46739 ± 10% +21.1% 56579 ± 8% softirqs.CPU6.SCHED
> > 53129 ± 4% -13.1% 46149 ± 8% softirqs.CPU70.SCHED
> > 55822 ± 7% -20.5% 44389 ± 8% softirqs.CPU73.SCHED
> > 56011 ± 5% -11.4% 49610 ± 7% softirqs.CPU77.SCHED
> > 55263 ± 9% -13.2% 47942 ± 12% softirqs.CPU78.SCHED
> > 58792 ± 14% -21.3% 46291 ± 9% softirqs.CPU81.SCHED
> > 53341 ± 7% -13.7% 46041 ± 10% softirqs.CPU83.SCHED
> > 59096 ± 15% -23.9% 44998 ± 6% softirqs.CPU85.SCHED
> > 36647 -98.5% 543.00 ± 61% numa-meminfo.node0.Active(file)
> > 620922 ± 4% -10.4% 556566 ± 5% numa-meminfo.node0.FilePages
> > 21243 ± 3% -36.2% 13543 ± 41% numa-meminfo.node0.Inactive
> > 20802 ± 3% -35.3% 13455 ± 42% numa-meminfo.node0.Inactive(anon)
> > 15374 ± 9% -27.2% 11193 ± 8% numa-meminfo.node0.KernelStack
> > 21573 -34.7% 14084 ± 14% numa-meminfo.node0.Mapped
> > 1136795 ± 5% -12.4% 995965 ± 6% numa-meminfo.node0.MemUsed
> > 16420 ± 6% -66.0% 5580 ± 18% numa-meminfo.node0.PageTables
> > 108182 ± 2% -18.5% 88150 ± 3% numa-meminfo.node0.SUnreclaim
> > 166467 ± 2% -15.8% 140184 ± 4% numa-meminfo.node0.Slab
> > 181705 ± 36% +63.8% 297623 ± 10% numa-meminfo.node1.Active
> > 320.75 ± 27% +11187.0% 36203 numa-meminfo.node1.Active(file)
> > 2208 ± 38% +362.1% 10207 ± 54% numa-meminfo.node1.Inactive
> > 2150 ± 39% +356.0% 9804 ± 58% numa-meminfo.node1.Inactive(anon)
> > 41819 ± 10% +17.3% 49068 ± 6% numa-meminfo.node1.KReclaimable
> > 11711 ± 5% +47.2% 17238 ± 22% numa-meminfo.node1.KernelStack
> > 10642 +68.3% 17911 ± 11% numa-meminfo.node1.Mapped
> > 952520 ± 6% +20.3% 1146337 ± 3% numa-meminfo.node1.MemUsed
> > 12342 ± 15% +92.4% 23741 ± 9% numa-meminfo.node1.PageTables
> > 41819 ± 10% +17.3% 49068 ± 6% numa-meminfo.node1.SReclaimable
> > 80394 ± 3% +27.1% 102206 ± 3% numa-meminfo.node1.SUnreclaim
> > 122214 ± 3% +23.8% 151275 ± 3% numa-meminfo.node1.Slab
> > 9160 -98.5% 135.25 ± 61% numa-vmstat.node0.nr_active_file
> > 155223 ± 4% -10.4% 139122 ± 5% numa-vmstat.node0.nr_file_pages
> > 5202 ± 3% -35.4% 3362 ± 42% numa-vmstat.node0.nr_inactive_anon
> > 109.50 ± 14% -80.1% 21.75 ±160% numa-vmstat.node0.nr_inactive_file
> > 14757 ± 3% -34.4% 9676 ± 12% numa-vmstat.node0.nr_kernel_stack
> > 5455 -34.9% 3549 ± 12% numa-vmstat.node0.nr_mapped
> > 4069 ± 6% -68.3% 1289 ± 24% numa-vmstat.node0.nr_page_table_pages
> > 26943 ± 2% -19.2% 21761 ± 3% numa-vmstat.node0.nr_slab_unreclaimable
> > 2240 ± 6% -97.8% 49.00 ± 69% numa-vmstat.node0.nr_written
> > 9160 -98.5% 135.25 ± 61% numa-vmstat.node0.nr_zone_active_file
> > 5202 ± 3% -35.4% 3362 ± 42% numa-vmstat.node0.nr_zone_inactive_anon
> > 109.50 ± 14% -80.1% 21.75 ±160% numa-vmstat.node0.nr_zone_inactive_file
> > 79.75 ± 28% +11247.0% 9049 numa-vmstat.node1.nr_active_file
> > 542.25 ± 41% +352.1% 2451 ± 58% numa-vmstat.node1.nr_inactive_anon
> > 14.00 ±140% +617.9% 100.50 ± 35% numa-vmstat.node1.nr_inactive_file
> > 11182 ± 4% +28.9% 14415 ± 4% numa-vmstat.node1.nr_kernel_stack
> > 2728 ± 3% +67.7% 4576 ± 9% numa-vmstat.node1.nr_mapped
> > 3056 ± 15% +88.2% 5754 ± 8% numa-vmstat.node1.nr_page_table_pages
> > 10454 ± 10% +17.3% 12262 ± 7% numa-vmstat.node1.nr_slab_reclaimable
> > 20006 ± 3% +25.0% 25016 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
> > 19.00 ± 52% +11859.2% 2272 ± 2% numa-vmstat.node1.nr_written
> > 79.75 ± 28% +11247.0% 9049 numa-vmstat.node1.nr_zone_active_file
> > 542.25 ± 41% +352.1% 2451 ± 58% numa-vmstat.node1.nr_zone_inactive_anon
> > 14.00 ±140% +617.9% 100.50 ± 35% numa-vmstat.node1.nr_zone_inactive_file
> > 173580 ± 21% +349.5% 780280 ± 7% sched_debug.cfs_rq:/.MIN_vruntime.avg
> > 6891819 ± 37% +109.1% 14412817 ± 9% sched_debug.cfs_rq:/.MIN_vruntime.max
> > 1031500 ± 25% +189.1% 2982452 ± 8% sched_debug.cfs_rq:/.MIN_vruntime.stddev
> > 149079 +13.6% 169354 ± 2% sched_debug.cfs_rq:/.exec_clock.min
> > 8550 ± 3% -59.7% 3442 ± 32% sched_debug.cfs_rq:/.exec_clock.stddev
> > 4.95 ± 6% -15.2% 4.20 ± 10% sched_debug.cfs_rq:/.load_avg.min
> > 173580 ± 21% +349.5% 780280 ± 7% sched_debug.cfs_rq:/.max_vruntime.avg
> > 6891819 ± 37% +109.1% 14412817 ± 9% sched_debug.cfs_rq:/.max_vruntime.max
> > 1031500 ± 25% +189.1% 2982452 ± 8% sched_debug.cfs_rq:/.max_vruntime.stddev
> > 16144141 +27.9% 20645199 ± 6% sched_debug.cfs_rq:/.min_vruntime.avg
> > 17660392 +27.7% 22546402 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
> > 13747718 +36.8% 18802595 ± 5% sched_debug.cfs_rq:/.min_vruntime.min
> > 0.17 ± 11% +35.0% 0.22 ± 15% sched_debug.cfs_rq:/.nr_running.stddev
> > 10.64 ± 14% -26.4% 7.83 ± 12% sched_debug.cpu.clock.stddev
> > 10.64 ± 14% -26.4% 7.83 ± 12% sched_debug.cpu.clock_task.stddev
> > 7093 ± 42% -65.9% 2420 ±120% sched_debug.cpu.curr->pid.min
> > 2434979 ± 2% -18.6% 1981697 ± 3% sched_debug.cpu.nr_switches.avg
> > 3993189 ± 6% -22.2% 3104832 ± 5% sched_debug.cpu.nr_switches.max
> > -145.03 -42.8% -82.90 sched_debug.cpu.nr_uninterruptible.min
> > 2097122 ± 6% +38.7% 2908923 ± 6% sched_debug.cpu.sched_count.min
> > 809684 ± 13% -30.5% 562929 ± 17% sched_debug.cpu.sched_count.stddev
> > 307565 ± 4% -15.1% 261231 ± 3% sched_debug.cpu.ttwu_count.min
> > 207286 ± 6% -16.4% 173387 ± 3% sched_debug.cpu.ttwu_local.min
> > 125963 ± 23% +53.1% 192849 ± 2% sched_debug.cpu.ttwu_local.stddev
> > 2527246 +10.8% 2800959 ± 3% sched_debug.cpu.yld_count.avg
> > 1294266 ± 4% +53.7% 1989264 ± 2% sched_debug.cpu.yld_count.min
> > 621332 ± 9% -38.4% 382813 ± 22% sched_debug.cpu.yld_count.stddev
> > 899.50 ± 28% -48.2% 465.75 ± 42% interrupts.37:IR-PCI-MSI.1572868-edge.eth0-TxRx-3
> > 372.50 ± 7% +169.5% 1004 ± 40% interrupts.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
> > 6201 ± 8% +17.9% 7309 ± 3% interrupts.CPU0.CAL:Function_call_interrupts
> > 653368 ± 47% +159.4% 1695029 ± 17% interrupts.CPU0.RES:Rescheduling_interrupts
> > 7104 ± 7% +13.6% 8067 interrupts.CPU1.CAL:Function_call_interrupts
> > 2094 ± 59% +89.1% 3962 ± 10% interrupts.CPU10.TLB:TLB_shootdowns
> > 7309 ± 8% +11.2% 8125 interrupts.CPU11.CAL:Function_call_interrupts
> > 2089 ± 62% +86.2% 3890 ± 11% interrupts.CPU13.TLB:TLB_shootdowns
> > 7068 ± 8% +15.2% 8144 ± 2% interrupts.CPU14.CAL:Function_call_interrupts
> > 7112 ± 7% +13.6% 8079 ± 3% interrupts.CPU15.CAL:Function_call_interrupts
> > 1950 ± 61% +103.5% 3968 ± 11% interrupts.CPU15.TLB:TLB_shootdowns
> > 899.50 ± 28% -48.2% 465.75 ± 42% interrupts.CPU16.37:IR-PCI-MSI.1572868-edge.eth0-TxRx-3
> > 2252 ± 47% +62.6% 3664 ± 15% interrupts.CPU16.TLB:TLB_shootdowns
> > 7111 ± 8% +14.8% 8167 ± 3% interrupts.CPU18.CAL:Function_call_interrupts
> > 1972 ± 60% +96.3% 3872 ± 9% interrupts.CPU18.TLB:TLB_shootdowns
> > 372.50 ± 7% +169.5% 1004 ± 40% interrupts.CPU19.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
> > 2942 ± 12% -57.5% 1251 ± 22% interrupts.CPU22.TLB:TLB_shootdowns
> > 7819 -12.2% 6861 ± 3% interrupts.CPU23.CAL:Function_call_interrupts
> > 3327 ± 12% -62.7% 1241 ± 29% interrupts.CPU23.TLB:TLB_shootdowns
> > 7767 ± 3% -14.0% 6683 ± 5% interrupts.CPU24.CAL:Function_call_interrupts
> > 3185 ± 21% -63.8% 1154 ± 14% interrupts.CPU24.TLB:TLB_shootdowns
> > 7679 ± 4% -11.3% 6812 ± 2% interrupts.CPU25.CAL:Function_call_interrupts
> > 3004 ± 28% -63.4% 1100 ± 7% interrupts.CPU25.TLB:TLB_shootdowns
> > 3187 ± 17% -61.3% 1232 ± 35% interrupts.CPU26.TLB:TLB_shootdowns
> > 3193 ± 16% -59.3% 1299 ± 34% interrupts.CPU27.TLB:TLB_shootdowns
> > 3059 ± 21% -58.0% 1285 ± 32% interrupts.CPU28.TLB:TLB_shootdowns
> > 7798 ± 4% -13.8% 6719 ± 7% interrupts.CPU29.CAL:Function_call_interrupts
> > 3122 ± 20% -62.3% 1178 ± 37% interrupts.CPU29.TLB:TLB_shootdowns
> > 7727 ± 2% -11.6% 6827 ± 5% interrupts.CPU30.CAL:Function_call_interrupts
> > 3102 ± 18% -59.4% 1259 ± 33% interrupts.CPU30.TLB:TLB_shootdowns
> > 3269 ± 24% -58.1% 1371 ± 48% interrupts.CPU31.TLB:TLB_shootdowns
> > 7918 ± 3% -14.5% 6771 interrupts.CPU32.CAL:Function_call_interrupts
> > 3324 ± 18% -70.7% 973.50 ± 18% interrupts.CPU32.TLB:TLB_shootdowns
> > 2817 ± 27% -60.2% 1121 ± 26% interrupts.CPU33.TLB:TLB_shootdowns
> > 7956 ± 3% -11.8% 7018 ± 4% interrupts.CPU34.CAL:Function_call_interrupts
> > 3426 ± 21% -70.3% 1018 ± 29% interrupts.CPU34.TLB:TLB_shootdowns
> > 3121 ± 17% -70.3% 926.75 ± 22% interrupts.CPU35.TLB:TLB_shootdowns
> > 7596 ± 4% -10.6% 6793 ± 3% interrupts.CPU36.CAL:Function_call_interrupts
> > 2900 ± 30% -62.3% 1094 ± 34% interrupts.CPU36.TLB:TLB_shootdowns
> > 7863 -13.1% 6833 ± 2% interrupts.CPU37.CAL:Function_call_interrupts
> > 3259 ± 15% -65.9% 1111 ± 20% interrupts.CPU37.TLB:TLB_shootdowns
> > 3230 ± 26% -64.0% 1163 ± 39% interrupts.CPU38.TLB:TLB_shootdowns
> > 7728 ± 5% -13.8% 6662 ± 7% interrupts.CPU39.CAL:Function_call_interrupts
> > 2950 ± 29% -61.6% 1133 ± 26% interrupts.CPU39.TLB:TLB_shootdowns
> > 6864 ± 3% +18.7% 8147 interrupts.CPU4.CAL:Function_call_interrupts
> > 1847 ± 59% +118.7% 4039 ± 7% interrupts.CPU4.TLB:TLB_shootdowns
> > 7951 ± 6% -15.0% 6760 ± 2% interrupts.CPU40.CAL:Function_call_interrupts
> > 3200 ± 30% -72.3% 886.50 ± 39% interrupts.CPU40.TLB:TLB_shootdowns
> > 7819 ± 6% -11.3% 6933 ± 2% interrupts.CPU41.CAL:Function_call_interrupts
> > 3149 ± 28% -62.9% 1169 ± 24% interrupts.CPU41.TLB:TLB_shootdowns
> > 7884 ± 4% -11.0% 7019 ± 2% interrupts.CPU42.CAL:Function_call_interrupts
> > 3248 ± 16% -63.4% 1190 ± 23% interrupts.CPU42.TLB:TLB_shootdowns
> > 7659 ± 5% -12.7% 6690 ± 3% interrupts.CPU43.CAL:Function_call_interrupts
> > 490732 ± 20% +114.5% 1052606 ± 47% interrupts.CPU43.RES:Rescheduling_interrupts
> > 1432688 ± 34% -67.4% 467217 ± 43% interrupts.CPU47.RES:Rescheduling_interrupts
> > 7122 ± 8% +16.0% 8259 ± 3% interrupts.CPU48.CAL:Function_call_interrupts
> > 1868 ± 65% +118.4% 4079 ± 8% interrupts.CPU48.TLB:TLB_shootdowns
> > 7165 ± 8% +11.3% 7977 ± 5% interrupts.CPU49.CAL:Function_call_interrupts
> > 1961 ± 59% +98.4% 3891 ± 4% interrupts.CPU49.TLB:TLB_shootdowns
> > 461807 ± 47% +190.8% 1342990 ± 48% interrupts.CPU5.RES:Rescheduling_interrupts
> > 7167 ± 7% +15.4% 8273 interrupts.CPU50.CAL:Function_call_interrupts
> > 2027 ± 51% +103.9% 4134 ± 8% interrupts.CPU50.TLB:TLB_shootdowns
> > 7163 ± 9% +16.3% 8328 interrupts.CPU51.CAL:Function_call_interrupts
> > 660073 ± 33% +74.0% 1148640 ± 25% interrupts.CPU51.RES:Rescheduling_interrupts
> > 2043 ± 64% +95.8% 4000 ± 5% interrupts.CPU51.TLB:TLB_shootdowns
> > 7428 ± 9% +13.5% 8434 ± 2% interrupts.CPU52.CAL:Function_call_interrupts
> > 2280 ± 61% +85.8% 4236 ± 9% interrupts.CPU52.TLB:TLB_shootdowns
> > 7144 ± 11% +17.8% 8413 interrupts.CPU53.CAL:Function_call_interrupts
> > 1967 ± 67% +104.7% 4026 ± 5% interrupts.CPU53.TLB:TLB_shootdowns
> > 7264 ± 10% +15.6% 8394 ± 4% interrupts.CPU54.CAL:Function_call_interrupts
> > 7045 ± 11% +18.7% 8365 ± 2% interrupts.CPU56.CAL:Function_call_interrupts
> > 2109 ± 59% +91.6% 4041 ± 10% interrupts.CPU56.TLB:TLB_shootdowns
> > 7307 ± 9% +15.3% 8428 ± 2% interrupts.CPU57.CAL:Function_call_interrupts
> > 2078 ± 64% +96.5% 4085 ± 6% interrupts.CPU57.TLB:TLB_shootdowns
> > 6834 ± 12% +19.8% 8190 ± 3% interrupts.CPU58.CAL:Function_call_interrupts
> > 612496 ± 85% +122.5% 1362815 ± 27% interrupts.CPU58.RES:Rescheduling_interrupts
> > 1884 ± 69% +112.0% 3995 ± 8% interrupts.CPU58.TLB:TLB_shootdowns
> > 7185 ± 8% +15.9% 8329 interrupts.CPU59.CAL:Function_call_interrupts
> > 1982 ± 58% +101.1% 3986 ± 5% interrupts.CPU59.TLB:TLB_shootdowns
> > 7051 ± 6% +13.1% 7975 interrupts.CPU6.CAL:Function_call_interrupts
> > 1831 ± 49% +102.1% 3701 ± 8% interrupts.CPU6.TLB:TLB_shootdowns
> > 7356 ± 8% +16.2% 8548 interrupts.CPU60.CAL:Function_call_interrupts
> > 2124 ± 57% +92.8% 4096 ± 5% interrupts.CPU60.TLB:TLB_shootdowns
> > 7243 ± 9% +15.1% 8334 interrupts.CPU61.CAL:Function_call_interrupts
> > 572423 ± 71% +110.0% 1201919 ± 40% interrupts.CPU61.RES:Rescheduling_interrupts
> > 7295 ± 9% +14.7% 8369 interrupts.CPU63.CAL:Function_call_interrupts
> > 2139 ± 57% +85.7% 3971 ± 3% interrupts.CPU63.TLB:TLB_shootdowns
> > 7964 ± 2% -15.6% 6726 ± 5% interrupts.CPU66.CAL:Function_call_interrupts
> > 3198 ± 21% -65.0% 1119 ± 24% interrupts.CPU66.TLB:TLB_shootdowns
> > 8103 ± 2% -17.5% 6687 ± 9% interrupts.CPU67.CAL:Function_call_interrupts
> > 3357 ± 18% -62.9% 1244 ± 32% interrupts.CPU67.TLB:TLB_shootdowns
> > 7772 ± 2% -14.0% 6687 ± 8% interrupts.CPU68.CAL:Function_call_interrupts
> > 2983 ± 17% -59.2% 1217 ± 15% interrupts.CPU68.TLB:TLB_shootdowns
> > 7986 ± 4% -13.8% 6887 ± 4% interrupts.CPU69.CAL:Function_call_interrupts
> > 3192 ± 24% -65.0% 1117 ± 30% interrupts.CPU69.TLB:TLB_shootdowns
> > 7070 ± 6% +14.6% 8100 ± 2% interrupts.CPU7.CAL:Function_call_interrupts
> > 697891 ± 32% +54.4% 1077890 ± 18% interrupts.CPU7.RES:Rescheduling_interrupts
> > 1998 ± 55% +97.1% 3938 ± 10% interrupts.CPU7.TLB:TLB_shootdowns
> > 8085 -13.4% 7002 ± 3% interrupts.CPU70.CAL:Function_call_interrupts
> > 1064985 ± 35% -62.5% 398986 ± 29% interrupts.CPU70.RES:Rescheduling_interrupts
> > 3347 ± 12% -61.7% 1280 ± 24% interrupts.CPU70.TLB:TLB_shootdowns
> > 2916 ± 16% -58.8% 1201 ± 39% interrupts.CPU71.TLB:TLB_shootdowns
> > 3314 ± 19% -61.3% 1281 ± 26% interrupts.CPU72.TLB:TLB_shootdowns
> > 3119 ± 18% -61.5% 1200 ± 39% interrupts.CPU73.TLB:TLB_shootdowns
> > 7992 ± 4% -12.6% 6984 ± 3% interrupts.CPU74.CAL:Function_call_interrupts
> > 3187 ± 21% -56.8% 1378 ± 40% interrupts.CPU74.TLB:TLB_shootdowns
> > 7953 ± 4% -12.0% 6999 ± 4% interrupts.CPU75.CAL:Function_call_interrupts
> > 3072 ± 26% -56.8% 1327 ± 34% interrupts.CPU75.TLB:TLB_shootdowns
> > 8119 ± 5% -12.4% 7109 ± 7% interrupts.CPU76.CAL:Function_call_interrupts
> > 3418 ± 20% -67.5% 1111 ± 31% interrupts.CPU76.TLB:TLB_shootdowns
> > 7804 ± 5% -11.4% 6916 ± 4% interrupts.CPU77.CAL:Function_call_interrupts
> > 7976 ± 5% -14.4% 6826 ± 3% interrupts.CPU78.CAL:Function_call_interrupts
> > 3209 ± 27% -71.8% 904.75 ± 28% interrupts.CPU78.TLB:TLB_shootdowns
> > 8187 ± 4% -14.6% 6991 ± 3% interrupts.CPU79.CAL:Function_call_interrupts
> > 3458 ± 20% -67.5% 1125 ± 36% interrupts.CPU79.TLB:TLB_shootdowns
> > 7122 ± 7% +14.2% 8136 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
> > 2096 ± 63% +87.4% 3928 ± 8% interrupts.CPU8.TLB:TLB_shootdowns
> > 8130 ± 5% -17.2% 6728 ± 5% interrupts.CPU81.CAL:Function_call_interrupts
> > 3253 ± 24% -70.6% 955.00 ± 38% interrupts.CPU81.TLB:TLB_shootdowns
> > 7940 ± 5% -13.9% 6839 ± 5% interrupts.CPU82.CAL:Function_call_interrupts
> > 2952 ± 26% -66.3% 996.00 ± 51% interrupts.CPU82.TLB:TLB_shootdowns
> > 7900 ± 6% -13.4% 6844 ± 3% interrupts.CPU83.CAL:Function_call_interrupts
> > 3012 ± 34% -68.3% 956.00 ± 17% interrupts.CPU83.TLB:TLB_shootdowns
> > 7952 ± 6% -15.8% 6695 ± 2% interrupts.CPU84.CAL:Function_call_interrupts
> > 3049 ± 31% -75.5% 746.50 ± 27% interrupts.CPU84.TLB:TLB_shootdowns
> > 8065 ± 6% -15.7% 6798 interrupts.CPU85.CAL:Function_call_interrupts
> > 3222 ± 23% -69.7% 976.00 ± 13% interrupts.CPU85.TLB:TLB_shootdowns
> > 8049 ± 5% -13.2% 6983 ± 4% interrupts.CPU86.CAL:Function_call_interrupts
> > 3159 ± 19% -61.9% 1202 ± 27% interrupts.CPU86.TLB:TLB_shootdowns
> > 8154 ± 8% -16.9% 6773 ± 3% interrupts.CPU87.CAL:Function_call_interrupts
> > 1432962 ± 21% -48.5% 737989 ± 30% interrupts.CPU87.RES:Rescheduling_interrupts
> > 3186 ± 33% -72.3% 881.75 ± 21% interrupts.CPU87.TLB:TLB_shootdowns
> >
> >
> >
> > ***************************************************************************************************
> > lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
> > interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/lkp-bdw-ep6/stress-ng/1s/0xb000038
> >
> > commit:
> > fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> > 0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> > %stddev %change %stddev
> > \ | \
> > 3345449 +35.1% 4518187 ± 5% stress-ng.schedpolicy.ops
> > 3347036 +35.1% 4520740 ± 5% stress-ng.schedpolicy.ops_per_sec
> > 11464910 ± 6% -23.3% 8796455 ± 11% stress-ng.sigq.ops
> > 11452565 ± 6% -23.3% 8786844 ± 11% stress-ng.sigq.ops_per_sec
> > 228736 +20.7% 276087 ± 20% stress-ng.sleep.ops
> > 157479 +23.0% 193722 ± 21% stress-ng.sleep.ops_per_sec
> > 14584704 -5.8% 13744640 ± 4% stress-ng.timerfd.ops
> > 14546032 -5.7% 13718862 ± 4% stress-ng.timerfd.ops_per_sec
> > 27.24 ±105% +283.9% 104.58 ±109% iostat.sdb.r_await.max
> > 122324 ± 35% +63.9% 200505 ± 21% meminfo.AnonHugePages
> > 47267 ± 26% +155.2% 120638 ± 45% numa-meminfo.node1.AnonHugePages
> > 22880 ± 6% -9.9% 20605 ± 3% softirqs.CPU57.TIMER
> > 636196 ± 24% +38.5% 880847 ± 7% cpuidle.C1.usage
> > 55936214 ± 20% +63.9% 91684673 ± 18% cpuidle.C1E.time
> > 1.175e+08 ± 22% +101.8% 2.372e+08 ± 29% cpuidle.C3.time
> > 4.242e+08 ± 6% -39.1% 2.584e+08 ± 39% cpuidle.C6.time
> > 59.50 ± 34% +66.0% 98.75 ± 22% proc-vmstat.nr_anon_transparent_hugepages
> > 25612 ± 10% +13.8% 29146 ± 4% proc-vmstat.nr_kernel_stack
> > 2783465 ± 9% +14.5% 3187157 ± 9% proc-vmstat.pgalloc_normal
> > 1743 ± 28% +43.8% 2507 ± 23% proc-vmstat.thp_deferred_split_page
> > 1765 ± 30% +43.2% 2529 ± 22% proc-vmstat.thp_fault_alloc
> > 811.00 ± 3% -13.8% 699.00 ± 7% slabinfo.kmem_cache_node.active_objs
> > 864.00 ± 3% -13.0% 752.00 ± 7% slabinfo.kmem_cache_node.num_objs
> > 8686 ± 7% +13.6% 9869 ± 3% slabinfo.pid.active_objs
> > 8690 ± 7% +13.8% 9890 ± 3% slabinfo.pid.num_objs
> > 9813 ± 6% +15.7% 11352 ± 3% slabinfo.task_delay_info.active_objs
> > 9813 ± 6% +15.7% 11352 ± 3% slabinfo.task_delay_info.num_objs
> > 79.22 ± 10% -41.1% 46.68 ± 22% sched_debug.cfs_rq:/.load_avg.avg
> > 242.49 ± 6% -29.6% 170.70 ± 17% sched_debug.cfs_rq:/.load_avg.stddev
> > 43.14 ± 29% -67.1% 14.18 ± 66% sched_debug.cfs_rq:/.removed.load_avg.avg
> > 201.73 ± 15% -50.1% 100.68 ± 60% sched_debug.cfs_rq:/.removed.load_avg.stddev
> > 1987 ± 28% -67.3% 650.09 ± 66% sched_debug.cfs_rq:/.removed.runnable_sum.avg
> > 9298 ± 15% -50.3% 4616 ± 60% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
> > 18.17 ± 27% -68.6% 5.70 ± 63% sched_debug.cfs_rq:/.removed.util_avg.avg
> > 87.61 ± 13% -52.6% 41.48 ± 59% sched_debug.cfs_rq:/.removed.util_avg.stddev
> > 633327 ± 24% +38.4% 876596 ± 7% turbostat.C1
> > 2.75 ± 22% +1.8 4.52 ± 17% turbostat.C1E%
> > 5.76 ± 22% +6.1 11.82 ± 30% turbostat.C3%
> > 20.69 ± 5% -8.1 12.63 ± 38% turbostat.C6%
> > 15.62 ± 6% +18.4% 18.50 ± 8% turbostat.CPU%c1
> > 1.56 ± 16% +208.5% 4.82 ± 38% turbostat.CPU%c3
> > 12.81 ± 4% -48.1% 6.65 ± 43% turbostat.CPU%c6
> > 5.02 ± 8% -34.6% 3.28 ± 14% turbostat.Pkg%pc2
> > 0.85 ± 57% -84.7% 0.13 ±173% turbostat.Pkg%pc6
> > 88.25 ± 13% +262.6% 320.00 ± 71% interrupts.CPU10.TLB:TLB_shootdowns
> > 116.25 ± 36% +151.6% 292.50 ± 68% interrupts.CPU19.TLB:TLB_shootdowns
> > 109.25 ± 8% +217.4% 346.75 ±106% interrupts.CPU2.TLB:TLB_shootdowns
> > 15180 ±111% +303.9% 61314 ± 32% interrupts.CPU23.RES:Rescheduling_interrupts
> > 111.50 ± 26% +210.3% 346.00 ± 79% interrupts.CPU3.TLB:TLB_shootdowns
> > 86.50 ± 35% +413.0% 443.75 ± 66% interrupts.CPU33.TLB:TLB_shootdowns
> > 728.00 ± 8% +29.6% 943.50 ± 16% interrupts.CPU38.CAL:Function_call_interrupts
> > 1070 ± 72% +84.9% 1979 ± 9% interrupts.CPU54.76:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
> > 41429 ± 64% -73.7% 10882 ± 73% interrupts.CPU59.RES:Rescheduling_interrupts
> > 26330 ± 85% -73.3% 7022 ± 86% interrupts.CPU62.RES:Rescheduling_interrupts
> > 103.00 ± 22% +181.3% 289.75 ± 92% interrupts.CPU65.TLB:TLB_shootdowns
> > 100.00 ± 40% +365.0% 465.00 ± 71% interrupts.CPU70.TLB:TLB_shootdowns
> > 110.25 ± 18% +308.4% 450.25 ± 71% interrupts.CPU80.TLB:TLB_shootdowns
> > 93.50 ± 42% +355.1% 425.50 ± 82% interrupts.CPU84.TLB:TLB_shootdowns
> > 104.50 ± 18% +289.7% 407.25 ± 68% interrupts.CPU87.TLB:TLB_shootdowns
> > 1.76 ± 3% -0.1 1.66 ± 4% perf-stat.i.branch-miss-rate%
> > 8.08 ± 6% +2.0 10.04 perf-stat.i.cache-miss-rate%
> > 18031213 ± 4% +27.2% 22939937 ± 3% perf-stat.i.cache-misses
> > 4.041e+08 -1.9% 3.965e+08 perf-stat.i.cache-references
> > 31764 ± 26% -40.6% 18859 ± 10% perf-stat.i.cycles-between-cache-misses
> > 66.18 -1.5 64.71 perf-stat.i.iTLB-load-miss-rate%
> > 4503482 ± 8% +19.5% 5382698 ± 5% perf-stat.i.node-load-misses
> > 3892859 ± 2% +16.6% 4538750 ± 4% perf-stat.i.node-store-misses
> > 1526815 ± 13% +25.8% 1921178 ± 9% perf-stat.i.node-stores
> > 4.72 ± 4% +1.3 6.00 ± 3% perf-stat.overall.cache-miss-rate%
> > 9120 ± 6% -18.9% 7394 ± 2% perf-stat.overall.cycles-between-cache-misses
> > 18237318 ± 4% +25.4% 22866104 ± 3% perf-stat.ps.cache-misses
> > 4392089 ± 8% +18.1% 5189251 ± 5% perf-stat.ps.node-load-misses
> > 1629766 ± 2% +17.9% 1920947 ± 13% perf-stat.ps.node-loads
> > 3694566 ± 2% +16.1% 4288126 ± 4% perf-stat.ps.node-store-misses
> > 1536866 ± 12% +23.7% 1901141 ± 7% perf-stat.ps.node-stores
> > 38.20 ± 18% -13.2 24.96 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
> > 38.20 ± 18% -13.2 24.96 ± 10% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 7.98 ± 67% -7.2 0.73 ±173% perf-profile.calltrace.cycles-pp.smp_call_function_single.event_function_call.perf_remove_from_context.perf_event_release_kernel.perf_release
> > 7.98 ± 67% -7.2 0.73 ±173% perf-profile.calltrace.cycles-pp.event_function_call.perf_remove_from_context.perf_event_release_kernel.perf_release.__fput
> > 7.98 ± 67% -7.2 0.73 ±173% perf-profile.calltrace.cycles-pp.perf_remove_from_context.perf_event_release_kernel.perf_release.__fput.task_work_run
> > 11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64
> > 11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
> > 11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.task_work_run.do_exit.do_group_exit.get_signal.do_signal
> > 4.27 ± 66% -3.5 0.73 ±173% perf-profile.calltrace.cycles-pp.read
> > 4.05 ± 71% -3.3 0.73 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
> > 4.05 ± 71% -3.3 0.73 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
> > 13.30 ± 38% -8.2 5.07 ± 62% perf-profile.children.cycles-pp.task_work_run
> > 12.47 ± 46% -7.4 5.07 ± 62% perf-profile.children.cycles-pp.exit_to_usermode_loop
> > 12.47 ± 46% -7.4 5.07 ± 62% perf-profile.children.cycles-pp.__fput
> > 7.98 ± 67% -7.2 0.73 ±173% perf-profile.children.cycles-pp.perf_remove_from_context
> > 11.86 ± 41% -6.8 5.07 ± 62% perf-profile.children.cycles-pp.do_signal
> > 11.86 ± 41% -6.8 5.07 ± 62% perf-profile.children.cycles-pp.get_signal
> > 9.43 ± 21% -4.7 4.72 ± 67% perf-profile.children.cycles-pp.ksys_read
> > 9.43 ± 21% -4.7 4.72 ± 67% perf-profile.children.cycles-pp.vfs_read
> > 4.27 ± 66% -3.5 0.73 ±173% perf-profile.children.cycles-pp.read
> > 3.86 ±101% -3.1 0.71 ±173% perf-profile.children.cycles-pp._raw_spin_lock
> > 3.86 ±101% -3.1 0.71 ±173% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
> > 3.86 ±101% -3.1 0.71 ±173% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
> >
> >
> >
> > ***************************************************************************************************
> > lkp-csl-2sp5: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
> > os/gcc-7/performance/1HDD/ext4/x86_64-rhel-7.6/100%/debian-x86_64-2019-09-23.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002b
> >
> > commit:
> > fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> > 0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> > fail:runs %reproduction fail:runs
> > | | |
> > :2 50% 1:8 dmesg.WARNING:at_ip_selinux_file_ioctl/0x
> > %stddev %change %stddev
> > \ | \
> > 122451 ± 11% -19.9% 98072 ± 15% stress-ng.ioprio.ops
> > 116979 ± 11% -20.7% 92815 ± 16% stress-ng.ioprio.ops_per_sec
> > 274187 ± 21% -26.7% 201013 ± 11% stress-ng.kill.ops
> > 274219 ± 21% -26.7% 201040 ± 11% stress-ng.kill.ops_per_sec
> > 3973765 -10.1% 3570462 ± 5% stress-ng.lockf.ops
> > 3972581 -10.2% 3568935 ± 5% stress-ng.lockf.ops_per_sec
> > 10719 ± 8% -39.9% 6442 ± 22% stress-ng.procfs.ops
> > 9683 ± 3% -39.3% 5878 ± 22% stress-ng.procfs.ops_per_sec
> > 6562721 -35.1% 4260609 ± 8% stress-ng.schedpolicy.ops
> > 6564233 -35.1% 4261479 ± 8% stress-ng.schedpolicy.ops_per_sec
> > 1070988 +21.4% 1299977 ± 7% stress-ng.sigrt.ops
> > 1061773 +21.2% 1286618 ± 7% stress-ng.sigrt.ops_per_sec
> > 1155684 ± 5% -14.8% 984531 ± 16% stress-ng.symlink.ops
> > 991624 ± 4% -23.8% 755147 ± 41% stress-ng.symlink.ops_per_sec
> > 6925 -12.1% 6086 ± 27% stress-ng.time.percent_of_cpu_this_job_got
> > 24.68 +9.3 33.96 ± 52% mpstat.cpu.all.idle%
> > 171.00 ± 2% -55.3% 76.50 ± 60% numa-vmstat.node1.nr_inactive_file
> > 171.00 ± 2% -55.3% 76.50 ± 60% numa-vmstat.node1.nr_zone_inactive_file
> > 2.032e+11 -12.5% 1.777e+11 ± 27% perf-stat.i.cpu-cycles
> > 2.025e+11 -12.0% 1.782e+11 ± 27% perf-stat.ps.cpu-cycles
> > 25.00 +37.5% 34.38 ± 51% vmstat.cpu.id
> > 68.00 -13.2% 59.00 ± 27% vmstat.cpu.sy
> > 25.24 +37.0% 34.57 ± 51% iostat.cpu.idle
> > 68.21 -12.7% 59.53 ± 27% iostat.cpu.system
> > 4.31 ±100% +200.6% 12.96 ± 63% iostat.sda.r_await.max
> > 1014 ± 2% -17.1% 841.00 ± 10% meminfo.Inactive(file)
> > 30692 ± 12% -20.9% 24280 ± 30% meminfo.Mlocked
> > 103627 ± 27% -32.7% 69720 meminfo.Percpu
> > 255.50 ± 2% -18.1% 209.25 ± 10% proc-vmstat.nr_inactive_file
> > 255.50 ± 2% -18.1% 209.25 ± 10% proc-vmstat.nr_zone_inactive_file
> > 185035 ± 22% -22.2% 143917 ± 25% proc-vmstat.pgmigrate_success
> > 2107 -12.3% 1848 ± 27% turbostat.Avg_MHz
> > 69.00 -7.1% 64.12 ± 8% turbostat.PkgTmp
> > 94.63 -2.2% 92.58 ± 4% turbostat.RAMWatt
> > 96048 +26.8% 121800 ± 8% softirqs.CPU10.NET_RX
> > 96671 ± 4% +34.2% 129776 ± 6% softirqs.CPU15.NET_RX
> > 171243 ± 3% -12.9% 149135 ± 8% softirqs.CPU25.NET_RX
> > 165317 ± 4% -11.4% 146494 ± 9% softirqs.CPU27.NET_RX
> > 139558 -24.5% 105430 ± 14% softirqs.CPU58.NET_RX
> > 147836 -15.8% 124408 ± 6% softirqs.CPU63.NET_RX
> > 129568 -13.8% 111624 ± 10% softirqs.CPU66.NET_RX
> > 1050 ± 2% +14.2% 1198 ± 9% slabinfo.biovec-128.active_objs
> > 1050 ± 2% +14.2% 1198 ± 9% slabinfo.biovec-128.num_objs
> > 23129 +19.6% 27668 ± 6% slabinfo.kmalloc-512.active_objs
> > 766.50 +17.4% 899.75 ± 6% slabinfo.kmalloc-512.active_slabs
> > 24535 +17.4% 28806 ± 6% slabinfo.kmalloc-512.num_objs
> > 766.50 +17.4% 899.75 ± 6% slabinfo.kmalloc-512.num_slabs
> > 1039 ± 4% -4.3% 994.12 ± 6% slabinfo.sock_inode_cache.active_slabs
> > 40527 ± 4% -4.3% 38785 ± 6% slabinfo.sock_inode_cache.num_objs
> > 1039 ± 4% -4.3% 994.12 ± 6% slabinfo.sock_inode_cache.num_slabs
> > 1549456 -43.6% 873443 ± 24% sched_debug.cfs_rq:/.min_vruntime.stddev
> > 73.25 ± 5% +74.8% 128.03 ± 31% sched_debug.cfs_rq:/.nr_spread_over.stddev
> > 18.60 ± 57% -63.8% 6.73 ± 64% sched_debug.cfs_rq:/.removed.load_avg.avg
> > 79.57 ± 44% -44.1% 44.52 ± 55% sched_debug.cfs_rq:/.removed.load_avg.stddev
> > 857.10 ± 57% -63.8% 310.09 ± 64% sched_debug.cfs_rq:/.removed.runnable_sum.avg
> > 3664 ± 44% -44.1% 2049 ± 55% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
> > 4.91 ± 42% -45.3% 2.69 ± 61% sched_debug.cfs_rq:/.removed.util_avg.avg
> > 1549544 -43.6% 874006 ± 24% sched_debug.cfs_rq:/.spread0.stddev
> > 786.14 ± 6% -20.1% 628.46 ± 23% sched_debug.cfs_rq:/.util_avg.avg
> > 1415 ± 8% -16.7% 1178 ± 18% sched_debug.cfs_rq:/.util_avg.max
> > 467435 ± 15% +46.7% 685829 ± 15% sched_debug.cpu.avg_idle.avg
> > 17972 ± 8% +631.2% 131410 ± 34% sched_debug.cpu.avg_idle.min
> > 7.66 ± 26% +209.7% 23.72 ± 54% sched_debug.cpu.clock.stddev
> > 7.66 ± 26% +209.7% 23.72 ± 54% sched_debug.cpu.clock_task.stddev
> > 618063 ± 5% -17.0% 513085 ± 5% sched_debug.cpu.max_idle_balance_cost.max
> > 12083 ± 28% -85.4% 1768 ±231% sched_debug.cpu.max_idle_balance_cost.stddev
> > 12857 ± 16% +2117.7% 285128 ±106% sched_debug.cpu.yld_count.min
> > 0.55 ± 6% -0.2 0.37 ± 51% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
> > 0.30 ± 21% -0.2 0.14 ±105% perf-profile.children.cycles-pp.yield_task_fair
> > 0.32 ± 6% -0.2 0.16 ± 86% perf-profile.children.cycles-pp.rmap_walk_anon
> > 0.19 -0.1 0.10 ± 86% perf-profile.children.cycles-pp.page_mapcount_is_zero
> > 0.19 -0.1 0.10 ± 86% perf-profile.children.cycles-pp.total_mapcount
> > 0.14 -0.1 0.09 ± 29% perf-profile.children.cycles-pp.start_kernel
> > 0.11 ± 9% -0.0 0.07 ± 47% perf-profile.children.cycles-pp.__switch_to
> > 0.10 ± 14% -0.0 0.06 ± 45% perf-profile.children.cycles-pp.switch_fpu_return
> > 0.08 ± 6% -0.0 0.04 ± 79% perf-profile.children.cycles-pp.__update_load_avg_se
> > 0.12 ± 13% -0.0 0.09 ± 23% perf-profile.children.cycles-pp.native_write_msr
> > 0.31 ± 6% -0.2 0.15 ± 81% perf-profile.self.cycles-pp.poll_idle
> > 0.50 ± 6% -0.2 0.35 ± 50% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
> > 0.18 ± 2% -0.1 0.10 ± 86% perf-profile.self.cycles-pp.total_mapcount
> > 0.10 ± 14% -0.0 0.06 ± 45% perf-profile.self.cycles-pp.switch_fpu_return
> > 0.10 ± 10% -0.0 0.06 ± 47% perf-profile.self.cycles-pp.__switch_to
> > 0.07 ± 7% -0.0 0.03 ±100% perf-profile.self.cycles-pp.prep_new_page
> > 0.07 ± 7% -0.0 0.03 ±100% perf-profile.self.cycles-pp.llist_add_batch
> > 0.07 ± 14% -0.0 0.04 ± 79% perf-profile.self.cycles-pp.__update_load_avg_se
> > 0.12 ± 13% -0.0 0.09 ± 23% perf-profile.self.cycles-pp.native_write_msr
> > 66096 ± 99% -99.8% 148.50 ± 92% interrupts.109:PCI-MSI.31981642-edge.i40e-eth0-TxRx-73
> > 543.50 ± 39% -73.3% 145.38 ± 81% interrupts.112:PCI-MSI.31981645-edge.i40e-eth0-TxRx-76
> > 169.00 ± 28% -55.3% 75.50 ± 83% interrupts.116:PCI-MSI.31981649-edge.i40e-eth0-TxRx-80
> > 224.00 ± 14% -57.6% 95.00 ± 87% interrupts.121:PCI-MSI.31981654-edge.i40e-eth0-TxRx-85
> > 680.00 ± 28% -80.5% 132.75 ± 82% interrupts.43:PCI-MSI.31981576-edge.i40e-eth0-TxRx-7
> > 327.50 ± 31% -39.0% 199.62 ± 60% interrupts.60:PCI-MSI.31981593-edge.i40e-eth0-TxRx-24
> > 217.50 ± 19% -51.7% 105.12 ± 79% interrupts.63:PCI-MSI.31981596-edge.i40e-eth0-TxRx-27
> > 375.00 ± 46% -78.5% 80.50 ± 82% interrupts.70:PCI-MSI.31981603-edge.i40e-eth0-TxRx-34
> > 196.50 ± 3% -51.6% 95.12 ± 74% interrupts.72:PCI-MSI.31981605-edge.i40e-eth0-TxRx-36
> > 442.50 ± 45% -73.1% 118.88 ± 90% interrupts.79:PCI-MSI.31981612-edge.i40e-eth0-TxRx-43
> > 271.00 ± 8% -53.2% 126.88 ± 75% interrupts.82:PCI-MSI.31981615-edge.i40e-eth0-TxRx-46
> > 145448 ± 4% -41.6% 84975 ± 42% interrupts.CPU1.RES:Rescheduling_interrupts
> > 11773 ± 19% -38.1% 7290 ± 52% interrupts.CPU13.TLB:TLB_shootdowns
> > 24177 ± 15% +356.5% 110368 ± 58% interrupts.CPU16.RES:Rescheduling_interrupts
> > 3395 ± 3% +78.3% 6055 ± 18% interrupts.CPU17.NMI:Non-maskable_interrupts
> > 3395 ± 3% +78.3% 6055 ± 18% interrupts.CPU17.PMI:Performance_monitoring_interrupts
> > 106701 ± 41% -55.6% 47425 ± 56% interrupts.CPU18.RES:Rescheduling_interrupts
> > 327.50 ± 31% -39.3% 198.88 ± 60% interrupts.CPU24.60:PCI-MSI.31981593-edge.i40e-eth0-TxRx-24
> > 411618 +53.6% 632283 ± 77% interrupts.CPU25.LOC:Local_timer_interrupts
> > 16189 ± 26% -53.0% 7611 ± 66% interrupts.CPU25.TLB:TLB_shootdowns
> > 407253 +54.4% 628596 ± 78% interrupts.CPU26.LOC:Local_timer_interrupts
> > 216.50 ± 19% -51.8% 104.25 ± 80% interrupts.CPU27.63:PCI-MSI.31981596-edge.i40e-eth0-TxRx-27
> > 7180 -20.9% 5682 ± 25% interrupts.CPU29.NMI:Non-maskable_interrupts
> > 7180 -20.9% 5682 ± 25% interrupts.CPU29.PMI:Performance_monitoring_interrupts
> > 15186 ± 12% -45.5% 8276 ± 49% interrupts.CPU3.TLB:TLB_shootdowns
> > 13092 ± 19% -29.5% 9231 ± 35% interrupts.CPU30.TLB:TLB_shootdowns
> > 13204 ± 26% -29.3% 9336 ± 19% interrupts.CPU31.TLB:TLB_shootdowns
> > 374.50 ± 46% -78.7% 79.62 ± 83% interrupts.CPU34.70:PCI-MSI.31981603-edge.i40e-eth0-TxRx-34
> > 7188 -25.6% 5345 ± 26% interrupts.CPU35.NMI:Non-maskable_interrupts
> > 7188 -25.6% 5345 ± 26% interrupts.CPU35.PMI:Performance_monitoring_interrupts
> > 196.00 ± 4% -52.0% 94.12 ± 75% interrupts.CPU36.72:PCI-MSI.31981605-edge.i40e-eth0-TxRx-36
> > 12170 ± 20% -34.3% 7998 ± 32% interrupts.CPU39.TLB:TLB_shootdowns
> > 442.00 ± 45% -73.3% 118.12 ± 91% interrupts.CPU43.79:PCI-MSI.31981612-edge.i40e-eth0-TxRx-43
> > 12070 ± 15% -37.2% 7581 ± 49% interrupts.CPU43.TLB:TLB_shootdowns
> > 7177 -27.6% 5195 ± 26% interrupts.CPU45.NMI:Non-maskable_interrupts
> > 7177 -27.6% 5195 ± 26% interrupts.CPU45.PMI:Performance_monitoring_interrupts
> > 271.00 ± 8% -53.4% 126.38 ± 75% interrupts.CPU46.82:PCI-MSI.31981615-edge.i40e-eth0-TxRx-46
> > 3591 +84.0% 6607 ± 12% interrupts.CPU46.NMI:Non-maskable_interrupts
> > 3591 +84.0% 6607 ± 12% interrupts.CPU46.PMI:Performance_monitoring_interrupts
> > 57614 ± 30% -34.0% 38015 ± 28% interrupts.CPU46.RES:Rescheduling_interrupts
> > 149154 ± 41% -47.2% 78808 ± 51% interrupts.CPU51.RES:Rescheduling_interrupts
> > 30366 ± 28% +279.5% 115229 ± 42% interrupts.CPU52.RES:Rescheduling_interrupts
> > 29690 +355.5% 135237 ± 57% interrupts.CPU54.RES:Rescheduling_interrupts
> > 213106 ± 2% -66.9% 70545 ± 43% interrupts.CPU59.RES:Rescheduling_interrupts
> > 225753 ± 7% -72.9% 61212 ± 72% interrupts.CPU60.RES:Rescheduling_interrupts
> > 12430 ± 14% -41.5% 7276 ± 52% interrupts.CPU61.TLB:TLB_shootdowns
> > 44552 ± 22% +229.6% 146864 ± 36% interrupts.CPU65.RES:Rescheduling_interrupts
> > 126088 ± 56% -35.3% 81516 ± 73% interrupts.CPU66.RES:Rescheduling_interrupts
> > 170880 ± 15% -62.9% 63320 ± 52% interrupts.CPU68.RES:Rescheduling_interrupts
> > 186033 ± 10% -39.8% 112012 ± 41% interrupts.CPU69.RES:Rescheduling_interrupts
> > 679.50 ± 29% -80.5% 132.25 ± 82% interrupts.CPU7.43:PCI-MSI.31981576-edge.i40e-eth0-TxRx-7
> > 124750 ± 18% -39.4% 75553 ± 43% interrupts.CPU7.RES:Rescheduling_interrupts
> > 158500 ± 47% -52.1% 75915 ± 67% interrupts.CPU71.RES:Rescheduling_interrupts
> > 11846 ± 11% -32.5% 8001 ± 47% interrupts.CPU72.TLB:TLB_shootdowns
> > 66095 ± 99% -99.8% 147.62 ± 93% interrupts.CPU73.109:PCI-MSI.31981642-edge.i40e-eth0-TxRx-73
> > 7221 ± 2% -31.0% 4982 ± 35% interrupts.CPU73.NMI:Non-maskable_interrupts
> > 7221 ± 2% -31.0% 4982 ± 35% interrupts.CPU73.PMI:Performance_monitoring_interrupts
> > 15304 ± 14% -47.9% 7972 ± 31% interrupts.CPU73.TLB:TLB_shootdowns
> > 10918 ± 3% -31.9% 7436 ± 36% interrupts.CPU74.TLB:TLB_shootdowns
> > 543.00 ± 39% -73.3% 144.75 ± 81% interrupts.CPU76.112:PCI-MSI.31981645-edge.i40e-eth0-TxRx-76
> > 12214 ± 14% -40.9% 7220 ± 38% interrupts.CPU79.TLB:TLB_shootdowns
> > 168.00 ± 29% -55.7% 74.50 ± 85% interrupts.CPU80.116:PCI-MSI.31981649-edge.i40e-eth0-TxRx-80
> > 28619 ± 3% +158.4% 73939 ± 44% interrupts.CPU80.RES:Rescheduling_interrupts
> > 12258 -34.3% 8056 ± 29% interrupts.CPU80.TLB:TLB_shootdowns
> > 7214 -19.5% 5809 ± 24% interrupts.CPU82.NMI:Non-maskable_interrupts
> > 7214 -19.5% 5809 ± 24% interrupts.CPU82.PMI:Performance_monitoring_interrupts
> > 13522 ± 11% -41.2% 7949 ± 29% interrupts.CPU84.TLB:TLB_shootdowns
> > 223.50 ± 14% -57.8% 94.25 ± 88% interrupts.CPU85.121:PCI-MSI.31981654-edge.i40e-eth0-TxRx-85
> > 11989 ± 2% -31.7% 8194 ± 22% interrupts.CPU85.TLB:TLB_shootdowns
> > 121153 ± 29% -41.4% 70964 ± 58% interrupts.CPU86.RES:Rescheduling_interrupts
> > 11731 ± 8% -40.7% 6957 ± 36% interrupts.CPU86.TLB:TLB_shootdowns
> > 12192 ± 22% -35.8% 7824 ± 43% interrupts.CPU87.TLB:TLB_shootdowns
> > 11603 ± 19% -31.8% 7915 ± 41% interrupts.CPU89.TLB:TLB_shootdowns
> > 10471 ± 5% -27.0% 7641 ± 31% interrupts.CPU91.TLB:TLB_shootdowns
> > 7156 -20.9% 5658 ± 23% interrupts.CPU92.NMI:Non-maskable_interrupts
> > 7156 -20.9% 5658 ± 23% interrupts.CPU92.PMI:Performance_monitoring_interrupts
> > 99802 ± 20% -43.6% 56270 ± 47% interrupts.CPU92.RES:Rescheduling_interrupts
> > 109162 ± 18% -28.7% 77839 ± 26% interrupts.CPU93.RES:Rescheduling_interrupts
> > 15044 ± 29% -44.4% 8359 ± 30% interrupts.CPU93.TLB:TLB_shootdowns
> > 110749 ± 19% -47.3% 58345 ± 48% interrupts.CPU94.RES:Rescheduling_interrupts
> > 7245 -21.4% 5697 ± 25% interrupts.CPU95.NMI:Non-maskable_interrupts
> > 7245 -21.4% 5697 ± 25% interrupts.CPU95.PMI:Performance_monitoring_interrupts
> > 1969 ± 5% +491.7% 11653 ± 81% interrupts.IWI:IRQ_work_interrupts
> >
> >
> >
> > ***************************************************************************************************
> > lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
> > interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-09-23.cgz/lkp-bdw-ep6/stress-ng/30s/0xb000038
> >
> > commit:
> > fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> > 0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> > %stddev %change %stddev
> > \ | \
> > 98318389 +43.0% 1.406e+08 stress-ng.schedpolicy.ops
> > 3277346 +43.0% 4685146 stress-ng.schedpolicy.ops_per_sec
> > 3.506e+08 ± 4% -10.3% 3.146e+08 ± 3% stress-ng.sigq.ops
> > 11684738 ± 4% -10.3% 10485353 ± 3% stress-ng.sigq.ops_per_sec
> > 3.628e+08 ± 6% -19.4% 2.925e+08 ± 6% stress-ng.time.involuntary_context_switches
> > 29456 +2.8% 30285 stress-ng.time.system_time
> > 7636655 ± 9% +46.6% 11197377 ± 27% cpuidle.C1E.usage
> > 1111483 ± 3% -9.5% 1005829 vmstat.system.cs
> > 22638222 ± 4% +16.5% 26370816 ± 11% meminfo.Committed_AS
> > 28908 ± 6% +24.6% 36020 ± 16% meminfo.KernelStack
> > 7636543 ± 9% +46.6% 11196090 ± 27% turbostat.C1E
> > 3.46 ± 16% -61.2% 1.35 ± 7% turbostat.Pkg%pc2
> > 217.54 +1.7% 221.33 turbostat.PkgWatt
> > 13.34 ± 2% +5.8% 14.11 turbostat.RAMWatt
> > 525.50 ± 8% -15.7% 443.00 ± 12% slabinfo.biovec-128.active_objs
> > 525.50 ± 8% -15.7% 443.00 ± 12% slabinfo.biovec-128.num_objs
> > 28089 ± 12% -33.0% 18833 ± 22% slabinfo.pool_workqueue.active_objs
> > 877.25 ± 12% -32.6% 591.00 ± 21% slabinfo.pool_workqueue.active_slabs
> > 28089 ± 12% -32.6% 18925 ± 21% slabinfo.pool_workqueue.num_objs
> > 877.25 ± 12% -32.6% 591.00 ± 21% slabinfo.pool_workqueue.num_slabs
> > 846.75 ± 6% -18.0% 694.75 ± 9% slabinfo.skbuff_fclone_cache.active_objs
> > 846.75 ± 6% -18.0% 694.75 ± 9% slabinfo.skbuff_fclone_cache.num_objs
> > 63348 ± 6% -20.7% 50261 ± 4% softirqs.CPU14.SCHED
> > 44394 ± 4% +21.4% 53880 ± 8% softirqs.CPU42.SCHED
> > 52246 ± 7% -15.1% 44352 softirqs.CPU47.SCHED
> > 58350 ± 4% -11.0% 51914 ± 7% softirqs.CPU6.SCHED
> > 58009 ± 7% -23.8% 44206 ± 4% softirqs.CPU63.SCHED
> > 49166 ± 6% +23.4% 60683 ± 9% softirqs.CPU68.SCHED
> > 44594 ± 7% +14.3% 50951 ± 8% softirqs.CPU78.SCHED
> > 46407 ± 9% +19.6% 55515 ± 8% softirqs.CPU84.SCHED
> > 55555 ± 8% -15.5% 46933 ± 4% softirqs.CPU9.SCHED
> > 198757 ± 18% +44.1% 286316 ± 9% numa-meminfo.node0.Active
> > 189280 ± 19% +37.1% 259422 ± 7% numa-meminfo.node0.Active(anon)
> > 110438 ± 33% +68.3% 185869 ± 16% numa-meminfo.node0.AnonHugePages
> > 143458 ± 28% +67.7% 240547 ± 13% numa-meminfo.node0.AnonPages
> > 12438 ± 16% +61.9% 20134 ± 37% numa-meminfo.node0.KernelStack
> > 1004379 ± 7% +16.4% 1168764 ± 4% numa-meminfo.node0.MemUsed
> > 357111 ± 24% -41.6% 208655 ± 29% numa-meminfo.node1.Active
> > 330094 ± 22% -39.6% 199339 ± 32% numa-meminfo.node1.Active(anon)
> > 265924 ± 25% -52.2% 127138 ± 46% numa-meminfo.node1.AnonHugePages
> > 314059 ± 22% -49.6% 158305 ± 36% numa-meminfo.node1.AnonPages
> > 15386 ± 16% -25.1% 11525 ± 15% numa-meminfo.node1.KernelStack
> > 1200805 ± 11% -18.6% 977595 ± 7% numa-meminfo.node1.MemUsed
> > 965.50 ± 15% -29.3% 682.25 ± 43% numa-meminfo.node1.Mlocked
> > 46762 ± 18% +37.8% 64452 ± 8% numa-vmstat.node0.nr_active_anon
> > 35393 ± 27% +68.9% 59793 ± 12% numa-vmstat.node0.nr_anon_pages
> > 52.75 ± 33% +71.1% 90.25 ± 15% numa-vmstat.node0.nr_anon_transparent_hugepages
> > 15.00 ± 96% +598.3% 104.75 ± 15% numa-vmstat.node0.nr_inactive_file
> > 11555 ± 22% +68.9% 19513 ± 41% numa-vmstat.node0.nr_kernel_stack
> > 550.25 ±162% +207.5% 1691 ± 48% numa-vmstat.node0.nr_written
> > 46762 ± 18% +37.8% 64452 ± 8% numa-vmstat.node0.nr_zone_active_anon
> > 15.00 ± 96% +598.3% 104.75 ± 15% numa-vmstat.node0.nr_zone_inactive_file
> > 82094 ± 22% -39.5% 49641 ± 32% numa-vmstat.node1.nr_active_anon
> > 78146 ± 23% -49.5% 39455 ± 37% numa-vmstat.node1.nr_anon_pages
> > 129.00 ± 25% -52.3% 61.50 ± 47% numa-vmstat.node1.nr_anon_transparent_hugepages
> > 107.75 ± 12% -85.4% 15.75 ±103% numa-vmstat.node1.nr_inactive_file
> > 14322 ± 11% -21.1% 11304 ± 11% numa-vmstat.node1.nr_kernel_stack
> > 241.00 ± 15% -29.5% 170.00 ± 43% numa-vmstat.node1.nr_mlock
> > 82094 ± 22% -39.5% 49641 ± 32% numa-vmstat.node1.nr_zone_active_anon
> > 107.75 ± 12% -85.4% 15.75 ±103% numa-vmstat.node1.nr_zone_inactive_file
> > 0.81 ± 5% +0.2 0.99 ± 10% perf-profile.calltrace.cycles-pp.task_rq_lock.task_sched_runtime.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime
> > 0.60 ± 11% +0.2 0.83 ± 9% perf-profile.calltrace.cycles-pp.___might_sleep.__might_fault._copy_to_user.put_itimerspec64.__x64_sys_timer_gettime
> > 1.73 ± 9% +0.3 2.05 ± 8% perf-profile.calltrace.cycles-pp.__might_fault._copy_to_user.put_itimerspec64.__x64_sys_timer_gettime.do_syscall_64
> > 3.92 ± 5% +0.6 4.49 ± 7% perf-profile.calltrace.cycles-pp.task_sched_runtime.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime
> > 4.17 ± 4% +0.6 4.78 ± 7% perf-profile.calltrace.cycles-pp.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64
> > 5.72 ± 3% +0.7 6.43 ± 7% perf-profile.calltrace.cycles-pp.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 0.24 ± 54% -0.2 0.07 ±131% perf-profile.children.cycles-pp.ext4_inode_csum_set
> > 0.45 ± 3% +0.1 0.56 ± 4% perf-profile.children.cycles-pp.__might_sleep
> > 0.84 ± 5% +0.2 1.03 ± 9% perf-profile.children.cycles-pp.task_rq_lock
> > 0.66 ± 8% +0.2 0.88 ± 7% perf-profile.children.cycles-pp.___might_sleep
> > 1.83 ± 9% +0.3 2.16 ± 8% perf-profile.children.cycles-pp.__might_fault
> > 4.04 ± 5% +0.6 4.62 ± 7% perf-profile.children.cycles-pp.task_sched_runtime
> > 4.24 ± 4% +0.6 4.87 ± 7% perf-profile.children.cycles-pp.cpu_clock_sample
> > 5.77 ± 3% +0.7 6.48 ± 7% perf-profile.children.cycles-pp.posix_cpu_timer_get
> > 0.22 ± 11% +0.1 0.28 ± 15% perf-profile.self.cycles-pp.cpu_clock_sample
> > 0.47 ± 7% +0.1 0.55 ± 5% perf-profile.self.cycles-pp.update_curr
> > 0.28 ± 5% +0.1 0.38 ± 14% perf-profile.self.cycles-pp.task_rq_lock
> > 0.42 ± 3% +0.1 0.53 ± 4% perf-profile.self.cycles-pp.__might_sleep
> > 0.50 ± 5% +0.1 0.61 ± 11% perf-profile.self.cycles-pp.task_sched_runtime
> > 0.63 ± 9% +0.2 0.85 ± 7% perf-profile.self.cycles-pp.___might_sleep
> > 9180611 ± 5% +40.1% 12859327 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.max
> > 1479571 ± 6% +57.6% 2331469 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.stddev
> > 7951 ± 6% -52.5% 3773 ± 17% sched_debug.cfs_rq:/.exec_clock.stddev
> > 321306 ± 39% -44.2% 179273 sched_debug.cfs_rq:/.load.max
> > 9180613 ± 5% +40.1% 12859327 ± 14% sched_debug.cfs_rq:/.max_vruntime.max
> > 1479571 ± 6% +57.6% 2331469 ± 14% sched_debug.cfs_rq:/.max_vruntime.stddev
> > 16622378 +20.0% 19940069 ± 7% sched_debug.cfs_rq:/.min_vruntime.avg
> > 18123901 +19.7% 21686545 ± 6% sched_debug.cfs_rq:/.min_vruntime.max
> > 14338218 ± 3% +27.4% 18267927 ± 7% sched_debug.cfs_rq:/.min_vruntime.min
> > 0.17 ± 16% +23.4% 0.21 ± 11% sched_debug.cfs_rq:/.nr_running.stddev
> > 319990 ± 39% -44.6% 177347 sched_debug.cfs_rq:/.runnable_weight.max
> > -2067420 -33.5% -1375445 sched_debug.cfs_rq:/.spread0.min
> > 1033 ± 8% -13.7% 891.85 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.max
> > 93676 ± 16% -29.0% 66471 ± 17% sched_debug.cpu.avg_idle.min
> > 10391 ± 52% +118.9% 22750 ± 15% sched_debug.cpu.curr->pid.avg
> > 14393 ± 35% +113.2% 30689 ± 17% sched_debug.cpu.curr->pid.max
> > 3041 ± 38% +161.8% 7963 ± 11% sched_debug.cpu.curr->pid.stddev
> > 3.38 ± 6% -16.3% 2.83 ± 5% sched_debug.cpu.nr_running.max
> > 2412687 ± 4% -16.0% 2027251 ± 3% sched_debug.cpu.nr_switches.avg
> > 4038819 ± 3% -20.2% 3223112 ± 5% sched_debug.cpu.nr_switches.max
> > 834203 ± 17% -37.8% 518798 ± 27% sched_debug.cpu.nr_switches.stddev
> > 45.85 ± 13% +41.2% 64.75 ± 18% sched_debug.cpu.nr_uninterruptible.max
> > 1937209 ± 2% +58.5% 3070891 ± 3% sched_debug.cpu.sched_count.min
> > 1074023 ± 13% -57.9% 451958 ± 12% sched_debug.cpu.sched_count.stddev
> > 1283769 ± 7% +65.1% 2118907 ± 7% sched_debug.cpu.yld_count.min
> > 714244 ± 5% -51.9% 343373 ± 22% sched_debug.cpu.yld_count.stddev
> > 12.54 ± 9% -18.8% 10.18 ± 15% perf-stat.i.MPKI
> > 1.011e+10 +2.6% 1.038e+10 perf-stat.i.branch-instructions
> > 13.22 ± 5% +2.5 15.75 ± 3% perf-stat.i.cache-miss-rate%
> > 21084021 ± 6% +33.9% 28231058 ± 6% perf-stat.i.cache-misses
> > 1143861 ± 5% -12.1% 1005721 ± 6% perf-stat.i.context-switches
> > 1.984e+11 +1.8% 2.02e+11 perf-stat.i.cpu-cycles
> > 1.525e+10 +1.3% 1.544e+10 perf-stat.i.dTLB-loads
> > 65.46 -2.7 62.76 ± 3% perf-stat.i.iTLB-load-miss-rate%
> > 20360883 ± 4% +10.5% 22500874 ± 4% perf-stat.i.iTLB-loads
> > 4.963e+10 +2.0% 5.062e+10 perf-stat.i.instructions
> > 181557 -2.4% 177113 perf-stat.i.msec
> > 5350122 ± 8% +26.5% 6765332 ± 7% perf-stat.i.node-load-misses
> > 4264320 ± 3% +24.8% 5321600 ± 4% perf-stat.i.node-store-misses
> > 6.12 ± 5% +1.5 7.60 ± 2% perf-stat.overall.cache-miss-rate%
> > 7646 ± 6% -17.7% 6295 ± 3% perf-stat.overall.cycles-between-cache-misses
> > 69.29 -1.1 68.22 perf-stat.overall.iTLB-load-miss-rate%
> > 61.11 ± 2% +6.6 67.71 ± 5% perf-stat.overall.node-load-miss-rate%
> > 74.82 +1.8 76.58 perf-stat.overall.node-store-miss-rate%
> > 1.044e+10 +1.8% 1.063e+10 perf-stat.ps.branch-instructions
> > 26325951 ± 6% +22.9% 32366684 ± 2% perf-stat.ps.cache-misses
> > 1115530 ± 3% -9.5% 1009780 perf-stat.ps.context-switches
> > 1.536e+10 +1.0% 1.552e+10 perf-stat.ps.dTLB-loads
> > 44718416 ± 2% +5.8% 47308605 ± 3% perf-stat.ps.iTLB-load-misses
> > 19831973 ± 4% +11.1% 22040029 ± 4% perf-stat.ps.iTLB-loads
> > 5.064e+10 +1.4% 5.137e+10 perf-stat.ps.instructions
> > 5454694 ± 9% +26.4% 6892365 ± 6% perf-stat.ps.node-load-misses
> > 4263688 ± 4% +24.9% 5325279 ± 4% perf-stat.ps.node-store-misses
> > 3.001e+13 +1.7% 3.052e+13 perf-stat.total.instructions
> > 18550 -74.9% 4650 ±173% interrupts.76:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
> > 7642 ± 9% -20.4% 6086 ± 2% interrupts.CPU0.CAL:Function_call_interrupts
> > 4376 ± 22% -75.4% 1077 ± 41% interrupts.CPU0.TLB:TLB_shootdowns
> > 8402 ± 5% -19.0% 6806 interrupts.CPU1.CAL:Function_call_interrupts
> > 4559 ± 20% -73.7% 1199 ± 15% interrupts.CPU1.TLB:TLB_shootdowns
> > 8423 ± 4% -20.2% 6725 ± 2% interrupts.CPU10.CAL:Function_call_interrupts
> > 4536 ± 14% -75.0% 1135 ± 20% interrupts.CPU10.TLB:TLB_shootdowns
> > 8303 ± 3% -18.2% 6795 ± 2% interrupts.CPU11.CAL:Function_call_interrupts
> > 4404 ± 11% -71.6% 1250 ± 35% interrupts.CPU11.TLB:TLB_shootdowns
> > 8491 ± 6% -21.3% 6683 interrupts.CPU12.CAL:Function_call_interrupts
> > 4723 ± 20% -77.2% 1077 ± 17% interrupts.CPU12.TLB:TLB_shootdowns
> > 8403 ± 5% -20.3% 6700 ± 2% interrupts.CPU13.CAL:Function_call_interrupts
> > 4557 ± 19% -74.2% 1175 ± 22% interrupts.CPU13.TLB:TLB_shootdowns
> > 8459 ± 4% -18.6% 6884 interrupts.CPU14.CAL:Function_call_interrupts
> > 4559 ± 18% -69.8% 1376 ± 13% interrupts.CPU14.TLB:TLB_shootdowns
> > 8305 ± 7% -17.7% 6833 ± 2% interrupts.CPU15.CAL:Function_call_interrupts
> > 4261 ± 25% -67.6% 1382 ± 24% interrupts.CPU15.TLB:TLB_shootdowns
> > 8277 ± 5% -19.1% 6696 ± 3% interrupts.CPU16.CAL:Function_call_interrupts
> > 4214 ± 22% -69.6% 1282 ± 8% interrupts.CPU16.TLB:TLB_shootdowns
> > 8258 ± 5% -18.9% 6694 ± 3% interrupts.CPU17.CAL:Function_call_interrupts
> > 4461 ± 19% -74.1% 1155 ± 21% interrupts.CPU17.TLB:TLB_shootdowns
> > 8457 ± 6% -20.6% 6717 interrupts.CPU18.CAL:Function_call_interrupts
> > 4889 ± 34% +60.0% 7822 interrupts.CPU18.NMI:Non-maskable_interrupts
> > 4889 ± 34% +60.0% 7822 interrupts.CPU18.PMI:Performance_monitoring_interrupts
> > 4731 ± 22% -77.2% 1078 ± 10% interrupts.CPU18.TLB:TLB_shootdowns
> > 8160 ± 5% -18.1% 6684 interrupts.CPU19.CAL:Function_call_interrupts
> > 4311 ± 20% -74.2% 1114 ± 13% interrupts.CPU19.TLB:TLB_shootdowns
> > 8464 ± 2% -18.2% 6927 ± 3% interrupts.CPU2.CAL:Function_call_interrupts
> > 4938 ± 14% -70.5% 1457 ± 18% interrupts.CPU2.TLB:TLB_shootdowns
> > 8358 ± 6% -19.7% 6715 ± 3% interrupts.CPU20.CAL:Function_call_interrupts
> > 4567 ± 24% -74.6% 1160 ± 35% interrupts.CPU20.TLB:TLB_shootdowns
> > 8460 ± 4% -22.3% 6577 ± 2% interrupts.CPU21.CAL:Function_call_interrupts
> > 4514 ± 18% -76.0% 1084 ± 22% interrupts.CPU21.TLB:TLB_shootdowns
> > 6677 ± 6% +19.6% 7988 ± 9% interrupts.CPU22.CAL:Function_call_interrupts
> > 1288 ± 14% +209.1% 3983 ± 35% interrupts.CPU22.TLB:TLB_shootdowns
> > 6751 ± 2% +24.0% 8370 ± 9% interrupts.CPU23.CAL:Function_call_interrupts
> > 1037 ± 29% +323.0% 4388 ± 36% interrupts.CPU23.TLB:TLB_shootdowns
> > 6844 +20.6% 8251 ± 9% interrupts.CPU24.CAL:Function_call_interrupts
> > 1205 ± 17% +229.2% 3967 ± 40% interrupts.CPU24.TLB:TLB_shootdowns
> > 6880 +21.9% 8389 ± 7% interrupts.CPU25.CAL:Function_call_interrupts
> > 1228 ± 19% +245.2% 4240 ± 35% interrupts.CPU25.TLB:TLB_shootdowns
> > 6494 ± 8% +25.1% 8123 ± 9% interrupts.CPU26.CAL:Function_call_interrupts
> > 1141 ± 13% +262.5% 4139 ± 32% interrupts.CPU26.TLB:TLB_shootdowns
> > 6852 +19.2% 8166 ± 7% interrupts.CPU27.CAL:Function_call_interrupts
> > 1298 ± 8% +197.1% 3857 ± 31% interrupts.CPU27.TLB:TLB_shootdowns
> > 6563 ± 6% +25.2% 8214 ± 8% interrupts.CPU28.CAL:Function_call_interrupts
> > 1176 ± 8% +237.1% 3964 ± 33% interrupts.CPU28.TLB:TLB_shootdowns
> > 6842 ± 2% +21.4% 8308 ± 8% interrupts.CPU29.CAL:Function_call_interrupts
> > 1271 ± 11% +223.8% 4118 ± 33% interrupts.CPU29.TLB:TLB_shootdowns
> > 8418 ± 3% -21.1% 6643 ± 2% interrupts.CPU3.CAL:Function_call_interrupts
> > 4677 ± 11% -75.1% 1164 ± 16% interrupts.CPU3.TLB:TLB_shootdowns
> > 6798 ± 3% +21.8% 8284 ± 7% interrupts.CPU30.CAL:Function_call_interrupts
> > 1219 ± 12% +236.3% 4102 ± 30% interrupts.CPU30.TLB:TLB_shootdowns
> > 6503 ± 4% +25.9% 8186 ± 6% interrupts.CPU31.CAL:Function_call_interrupts
> > 1046 ± 15% +289.1% 4072 ± 32% interrupts.CPU31.TLB:TLB_shootdowns
> > 6949 ± 3% +17.2% 8141 ± 8% interrupts.CPU32.CAL:Function_call_interrupts
> > 1241 ± 23% +210.6% 3854 ± 34% interrupts.CPU32.TLB:TLB_shootdowns
> > 1487 ± 26% +161.6% 3889 ± 46% interrupts.CPU33.TLB:TLB_shootdowns
> > 1710 ± 44% +140.1% 4105 ± 36% interrupts.CPU34.TLB:TLB_shootdowns
> > 6957 ± 2% +15.2% 8012 ± 9% interrupts.CPU35.CAL:Function_call_interrupts
> > 1165 ± 8% +223.1% 3765 ± 38% interrupts.CPU35.TLB:TLB_shootdowns
> > 1423 ± 24% +173.4% 3892 ± 33% interrupts.CPU36.TLB:TLB_shootdowns
> > 1279 ± 29% +224.2% 4148 ± 39% interrupts.CPU37.TLB:TLB_shootdowns
> > 1301 ± 20% +226.1% 4244 ± 35% interrupts.CPU38.TLB:TLB_shootdowns
> > 6906 ± 2% +18.5% 8181 ± 8% interrupts.CPU39.CAL:Function_call_interrupts
> > 368828 ± 20% +96.2% 723710 ± 7% interrupts.CPU39.RES:Rescheduling_interrupts
> > 1438 ± 12% +174.8% 3951 ± 33% interrupts.CPU39.TLB:TLB_shootdowns
> > 8399 ± 5% -19.2% 6788 ± 2% interrupts.CPU4.CAL:Function_call_interrupts
> > 4567 ± 18% -72.7% 1245 ± 28% interrupts.CPU4.TLB:TLB_shootdowns
> > 6895 +22.4% 8439 ± 9% interrupts.CPU40.CAL:Function_call_interrupts
> > 1233 ± 11% +247.1% 4280 ± 36% interrupts.CPU40.TLB:TLB_shootdowns
> > 6819 ± 2% +21.3% 8274 ± 9% interrupts.CPU41.CAL:Function_call_interrupts
> > 1260 ± 14% +207.1% 3871 ± 38% interrupts.CPU41.TLB:TLB_shootdowns
> > 1301 ± 9% +204.7% 3963 ± 36% interrupts.CPU42.TLB:TLB_shootdowns
> > 6721 ± 3% +22.3% 8221 ± 7% interrupts.CPU43.CAL:Function_call_interrupts
> > 1237 ± 19% +224.8% 4017 ± 35% interrupts.CPU43.TLB:TLB_shootdowns
> > 8422 ± 8% -22.7% 6506 ± 5% interrupts.CPU44.CAL:Function_call_interrupts
> > 15261375 ± 7% -7.8% 14064176 interrupts.CPU44.LOC:Local_timer_interrupts
> > 4376 ± 25% -75.7% 1063 ± 26% interrupts.CPU44.TLB:TLB_shootdowns
> > 8451 ± 5% -23.7% 6448 ± 6% interrupts.CPU45.CAL:Function_call_interrupts
> > 4351 ± 18% -74.9% 1094 ± 12% interrupts.CPU45.TLB:TLB_shootdowns
> > 8705 ± 6% -21.2% 6860 ± 2% interrupts.CPU46.CAL:Function_call_interrupts
> > 4787 ± 20% -69.5% 1462 ± 16% interrupts.CPU46.TLB:TLB_shootdowns
> > 8334 ± 3% -18.9% 6763 interrupts.CPU47.CAL:Function_call_interrupts
> > 4126 ± 10% -71.3% 1186 ± 18% interrupts.CPU47.TLB:TLB_shootdowns
> > 8578 ± 4% -21.7% 6713 interrupts.CPU48.CAL:Function_call_interrupts
> > 4520 ± 15% -74.5% 1154 ± 23% interrupts.CPU48.TLB:TLB_shootdowns
> > 8450 ± 8% -18.8% 6863 ± 3% interrupts.CPU49.CAL:Function_call_interrupts
> > 4494 ± 24% -66.5% 1505 ± 22% interrupts.CPU49.TLB:TLB_shootdowns
> > 8307 ± 4% -18.0% 6816 ± 2% interrupts.CPU5.CAL:Function_call_interrupts
> > 7845 -37.4% 4908 ± 34% interrupts.CPU5.NMI:Non-maskable_interrupts
> > 7845 -37.4% 4908 ± 34% interrupts.CPU5.PMI:Performance_monitoring_interrupts
> > 4429 ± 17% -69.8% 1339 ± 20% interrupts.CPU5.TLB:TLB_shootdowns
> > 8444 ± 4% -21.7% 6613 interrupts.CPU50.CAL:Function_call_interrupts
> > 4282 ± 16% -76.0% 1029 ± 17% interrupts.CPU50.TLB:TLB_shootdowns
> > 8750 ± 6% -22.2% 6803 interrupts.CPU51.CAL:Function_call_interrupts
> > 4755 ± 20% -73.1% 1277 ± 15% interrupts.CPU51.TLB:TLB_shootdowns
> > 8478 ± 6% -20.2% 6766 ± 2% interrupts.CPU52.CAL:Function_call_interrupts
> > 4337 ± 20% -72.6% 1190 ± 22% interrupts.CPU52.TLB:TLB_shootdowns
> > 8604 ± 7% -21.5% 6750 ± 4% interrupts.CPU53.CAL:Function_call_interrupts
> > 4649 ± 17% -74.3% 1193 ± 23% interrupts.CPU53.TLB:TLB_shootdowns
> > 8317 ± 9% -19.4% 6706 ± 3% interrupts.CPU54.CAL:Function_call_interrupts
> > 4372 ± 12% -75.4% 1076 ± 29% interrupts.CPU54.TLB:TLB_shootdowns
> > 8439 ± 3% -18.5% 6876 interrupts.CPU55.CAL:Function_call_interrupts
> > 4415 ± 11% -71.6% 1254 ± 17% interrupts.CPU55.TLB:TLB_shootdowns
> > 8869 ± 6% -22.6% 6864 ± 2% interrupts.CPU56.CAL:Function_call_interrupts
> > 517594 ± 13% +123.3% 1155539 ± 25% interrupts.CPU56.RES:Rescheduling_interrupts
> > 5085 ± 22% -74.9% 1278 ± 17% interrupts.CPU56.TLB:TLB_shootdowns
> > 8682 ± 4% -21.7% 6796 ± 2% interrupts.CPU57.CAL:Function_call_interrupts
> > 4808 ± 19% -74.1% 1243 ± 13% interrupts.CPU57.TLB:TLB_shootdowns
> > 8626 ± 7% -21.8% 6746 ± 2% interrupts.CPU58.CAL:Function_call_interrupts
> > 4816 ± 20% -79.1% 1007 ± 28% interrupts.CPU58.TLB:TLB_shootdowns
> > 8759 ± 8% -20.3% 6984 interrupts.CPU59.CAL:Function_call_interrupts
> > 4840 ± 22% -70.6% 1423 ± 14% interrupts.CPU59.TLB:TLB_shootdowns
> > 8167 ± 6% -19.0% 6615 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
> > 4129 ± 21% -75.4% 1017 ± 24% interrupts.CPU6.TLB:TLB_shootdowns
> > 8910 ± 4% -23.7% 6794 ± 3% interrupts.CPU60.CAL:Function_call_interrupts
> > 5017 ± 12% -77.8% 1113 ± 15% interrupts.CPU60.TLB:TLB_shootdowns
> > 8689 ± 5% -21.6% 6808 interrupts.CPU61.CAL:Function_call_interrupts
> > 4715 ± 20% -77.6% 1055 ± 19% interrupts.CPU61.TLB:TLB_shootdowns
> > 8574 ± 4% -18.9% 6953 ± 2% interrupts.CPU62.CAL:Function_call_interrupts
> > 4494 ± 17% -72.3% 1244 ± 7% interrupts.CPU62.TLB:TLB_shootdowns
> > 8865 ± 3% -25.4% 6614 ± 7% interrupts.CPU63.CAL:Function_call_interrupts
> > 4870 ± 12% -76.8% 1130 ± 12% interrupts.CPU63.TLB:TLB_shootdowns
> > 8724 ± 7% -20.2% 6958 ± 3% interrupts.CPU64.CAL:Function_call_interrupts
> > 4736 ± 16% -72.6% 1295 ± 7% interrupts.CPU64.TLB:TLB_shootdowns
> > 8717 ± 6% -23.7% 6653 ± 4% interrupts.CPU65.CAL:Function_call_interrupts
> > 4626 ± 19% -76.5% 1087 ± 21% interrupts.CPU65.TLB:TLB_shootdowns
> > 6671 +24.7% 8318 ± 9% interrupts.CPU66.CAL:Function_call_interrupts
> > 1091 ± 8% +249.8% 3819 ± 32% interrupts.CPU66.TLB:TLB_shootdowns
> > 6795 ± 2% +26.9% 8624 ± 9% interrupts.CPU67.CAL:Function_call_interrupts
> > 1098 ± 24% +299.5% 4388 ± 39% interrupts.CPU67.TLB:TLB_shootdowns
> > 6704 ± 5% +25.8% 8431 ± 8% interrupts.CPU68.CAL:Function_call_interrupts
> > 1214 ± 15% +236.1% 4083 ± 36% interrupts.CPU68.TLB:TLB_shootdowns
> > 1049 ± 15% +326.2% 4473 ± 33% interrupts.CPU69.TLB:TLB_shootdowns
> > 8554 ± 6% -19.6% 6874 ± 2% interrupts.CPU7.CAL:Function_call_interrupts
> > 4753 ± 19% -71.7% 1344 ± 16% interrupts.CPU7.TLB:TLB_shootdowns
> > 1298 ± 13% +227.4% 4249 ± 38% interrupts.CPU70.TLB:TLB_shootdowns
> > 6976 +19.9% 8362 ± 7% interrupts.CPU71.CAL:Function_call_interrupts
> > 1232748 ± 18% -57.3% 525824 ± 33% interrupts.CPU71.RES:Rescheduling_interrupts
> > 1253 ± 9% +211.8% 3909 ± 31% interrupts.CPU71.TLB:TLB_shootdowns
> > 1316 ± 22% +188.7% 3800 ± 33% interrupts.CPU72.TLB:TLB_shootdowns
> > 6665 ± 5% +26.5% 8429 ± 8% interrupts.CPU73.CAL:Function_call_interrupts
> > 1202 ± 13% +234.1% 4017 ± 37% interrupts.CPU73.TLB:TLB_shootdowns
> > 6639 ± 5% +27.0% 8434 ± 8% interrupts.CPU74.CAL:Function_call_interrupts
> > 1079 ± 16% +269.4% 3986 ± 36% interrupts.CPU74.TLB:TLB_shootdowns
> > 1055 ± 12% +301.2% 4235 ± 34% interrupts.CPU75.TLB:TLB_shootdowns
> > 7011 ± 3% +21.6% 8522 ± 8% interrupts.CPU76.CAL:Function_call_interrupts
> > 1223 ± 13% +230.7% 4047 ± 35% interrupts.CPU76.TLB:TLB_shootdowns
> > 6886 ± 7% +25.6% 8652 ± 10% interrupts.CPU77.CAL:Function_call_interrupts
> > 1316 ± 16% +229.8% 4339 ± 36% interrupts.CPU77.TLB:TLB_shootdowns
> > 7343 ± 5% +19.1% 8743 ± 9% interrupts.CPU78.CAL:Function_call_interrupts
> > 1699 ± 37% +144.4% 4152 ± 31% interrupts.CPU78.TLB:TLB_shootdowns
> > 7136 ± 4% +21.4% 8666 ± 9% interrupts.CPU79.CAL:Function_call_interrupts
> > 1094 ± 13% +276.2% 4118 ± 34% interrupts.CPU79.TLB:TLB_shootdowns
> > 8531 ± 5% -19.5% 6869 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
> > 4764 ± 16% -71.0% 1382 ± 14% interrupts.CPU8.TLB:TLB_shootdowns
> > 1387 ± 29% +181.8% 3910 ± 38% interrupts.CPU80.TLB:TLB_shootdowns
> > 1114 ± 30% +259.7% 4007 ± 36% interrupts.CPU81.TLB:TLB_shootdowns
> > 7012 +23.9% 8685 ± 8% interrupts.CPU82.CAL:Function_call_interrupts
> > 1274 ± 12% +255.4% 4530 ± 27% interrupts.CPU82.TLB:TLB_shootdowns
> > 6971 ± 3% +23.8% 8628 ± 9% interrupts.CPU83.CAL:Function_call_interrupts
> > 1156 ± 18% +260.1% 4162 ± 34% interrupts.CPU83.TLB:TLB_shootdowns
> > 7030 ± 4% +21.0% 8504 ± 8% interrupts.CPU84.CAL:Function_call_interrupts
> > 1286 ± 23% +224.0% 4166 ± 31% interrupts.CPU84.TLB:TLB_shootdowns
> > 7059 +22.4% 8644 ± 11% interrupts.CPU85.CAL:Function_call_interrupts
> > 1421 ± 22% +208.8% 4388 ± 33% interrupts.CPU85.TLB:TLB_shootdowns
> > 7018 ± 2% +22.8% 8615 ± 9% interrupts.CPU86.CAL:Function_call_interrupts
> > 1258 ± 8% +231.1% 4167 ± 34% interrupts.CPU86.TLB:TLB_shootdowns
> > 1338 ± 3% +217.9% 4255 ± 31% interrupts.CPU87.TLB:TLB_shootdowns
> > 8376 ± 4% -19.0% 6787 ± 2% interrupts.CPU9.CAL:Function_call_interrupts
> > 4466 ± 17% -71.2% 1286 ± 18% interrupts.CPU9.TLB:TLB_shootdowns
> >
> >
> >
> >
> >
> > Disclaimer:
> > Results have been estimated based on internal Intel analysis and are provided
> > for informational purposes only. Any difference in system hardware or software
> > design or configuration may affect actual performance.
> >
> >
> > Thanks,
> > Oliver Sang
> >