Re: [lkp] [net] 192132b9a0: -17.5% netperf.Throughput_tps

From: Huang Ying
Date: Sun Sep 20 2015 - 21:34:13 EST


On Sun, 2015-09-20 at 19:19 -0600, David Ahern wrote:
> On 9/20/15 6:30 AM, kernel test robot wrote:
> > FYI, we noticed the below changes on
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
> > commit 192132b9a034d87566294be0fba5f8f75c2cf16b ("net: Add support for VRFs to inetpeer cache")
> >
> >
> > =========================================================================================
> > tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/runtime/nr_threads/cluster/test:
> >   lkp-sbx04/netperf/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/300s/200%/cs-localhost/TCP_CRR
> >
> > commit:
> > 5345c2e12d41f815c1009c9dee72f3d5fcfd4282
> > 192132b9a034d87566294be0fba5f8f75c2cf16b
> >
>
> Clarification: The reproduce file shows 128 instances of 'netperf -t
> TCP_CRR -c -C -l 300 -H 127.0.0.1' without an '&' on the end. Does
> that mean these 128 commands are run serially?

Sorry, that is a script bug; there should be a "&" at the end of each
command so that all 128 instances run concurrently. Will fix the
script.
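
Something like the following is what the reproduce script is meant to
do (a minimal sketch; the actual generated script differs):

    # start all 128 netperf TCP_CRR instances in the background,
    # then wait for them all to finish
    for i in $(seq 128); do
            netperf -t TCP_CRR -c -C -l 300 -H 127.0.0.1 &
    done
    wait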

>
> Also, this is the end patch of a series that first refactors and then
> adds a capability. The more relevant comparison is 8f58336d3f78 to
> 192132b9a034 (8f58336d3f78 is the commit before the series). Is it
> possible to get this test run on your system comparing those 2
> commits?

Sure. It is attached to this mail.
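
For reference, the patches between those two commits can be listed
with:

    git log --oneline 8f58336d3f78..192132b9a034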

Best Regards,
Huang, Ying
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/runtime/nr_threads/cluster/test:
lkp-sbx04/netperf/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/300s/200%/cs-localhost/TCP_CRR

commit:
8f58336d3f78aef61c8023c18546155f5fdf3224
192132b9a034d87566294be0fba5f8f75c2cf16b

8f58336d3f78aef6 192132b9a034d87566294be0fb
---------------- --------------------------
         %stddev      %change       %stddev
             \            |             \
2825 ± 2% -17.0% 2344 ± 1% netperf.Throughput_tps
1.089e+08 ± 2% -16.9% 90493497 ± 1% netperf.time.involuntary_context_switches
1.086e+08 ± 2% -17.0% 90186076 ± 1% netperf.time.minor_page_faults
4599 ± 0% +10.1% 5062 ± 1% netperf.time.percent_of_cpu_this_job_got
13112 ± 0% +12.0% 14686 ± 1% netperf.time.system_time
940.67 ± 2% -16.9% 781.88 ± 1% netperf.time.user_time
1.085e+08 ± 2% -17.0% 90055371 ± 1% netperf.time.voluntary_context_switches
4.342e+08 ± 2% -17.0% 3.604e+08 ± 1% softirqs.NET_RX
2258 ± 1% +10.4% 2494 ± 4% uptime.idle
320.54 ± 0% -2.4% 312.95 ± 0% turbostat.CorWatt
376.48 ± 0% -2.0% 368.88 ± 0% turbostat.PkgWatt
1420157 ± 2% -16.9% 1180769 ± 1% vmstat.system.cs
68961 ± 0% +1.0% 69635 ± 0% vmstat.system.in
18193082 ± 11% -14.3% 15594591 ± 16% cpuidle.C1-SNB.time
3054463 ± 16% +45.0% 4428730 ± 19% cpuidle.C1E-SNB.time
238.50 ± 29% +5507.2% 13373 ± 83% cpuidle.C6-SNB.time
1.119e+08 ± 2% -16.9% 93008884 ± 1% proc-vmstat.numa_hit
1.119e+08 ± 2% -16.9% 93008682 ± 1% proc-vmstat.numa_local
6365197 ± 2% -18.4% 5191906 ± 2% proc-vmstat.pgalloc_dma32
1.07e+08 ± 2% -16.6% 89162443 ± 1% proc-vmstat.pgalloc_normal
1.095e+08 ± 2% -16.9% 91044527 ± 1% proc-vmstat.pgfault
1.133e+08 ± 2% -16.7% 94305253 ± 1% proc-vmstat.pgfree
1.089e+08 ± 2% -16.9% 90493497 ± 1% time.involuntary_context_switches
1.086e+08 ± 2% -17.0% 90186076 ± 1% time.minor_page_faults
4599 ± 0% +10.1% 5062 ± 1% time.percent_of_cpu_this_job_got
13112 ± 0% +12.0% 14686 ± 1% time.system_time
940.67 ± 2% -16.9% 781.88 ± 1% time.user_time
1.085e+08 ± 2% -17.0% 90055371 ± 1% time.voluntary_context_switches
28168329 ± 1% -17.9% 23130696 ± 2% numa-numastat.node0.local_node
28168412 ± 1% -17.9% 23130763 ± 2% numa-numastat.node0.numa_hit
28038183 ± 3% -15.4% 23732122 ± 1% numa-numastat.node1.local_node
28038223 ± 3% -15.4% 23732168 ± 1% numa-numastat.node1.numa_hit
27687321 ± 2% -17.1% 22948672 ± 2% numa-numastat.node2.local_node
27687946 ± 2% -17.1% 22948710 ± 2% numa-numastat.node2.numa_hit
27979864 ± 2% -17.1% 23200094 ± 1% numa-numastat.node3.local_node
27980945 ± 2% -17.1% 23200610 ± 1% numa-numastat.node3.numa_hit
1080 ± 93% -52.3% 515.75 ±157% numa-numastat.node3.other_node
90608 ± 2% -11.3% 80327 ± 1% slabinfo.Acpi-State.active_objs
1783 ± 2% -11.3% 1581 ± 1% slabinfo.Acpi-State.active_slabs
90950 ± 2% -11.3% 80684 ± 1% slabinfo.Acpi-State.num_objs
1783 ± 2% -11.3% 1581 ± 1% slabinfo.Acpi-State.num_slabs
45161 ± 4% -27.4% 32776 ± 3% slabinfo.kmalloc-256.active_objs
786.50 ± 4% -29.2% 556.50 ± 4% slabinfo.kmalloc-256.active_slabs
50366 ± 4% -29.2% 35654 ± 4% slabinfo.kmalloc-256.num_objs
786.50 ± 4% -29.2% 556.50 ± 4% slabinfo.kmalloc-256.num_slabs
78638 ± 3% -11.6% 69534 ± 1% slabinfo.kmalloc-64.active_objs
1289 ± 2% -13.1% 1120 ± 1% slabinfo.kmalloc-64.active_slabs
82552 ± 2% -13.1% 71749 ± 1% slabinfo.kmalloc-64.num_objs
1289 ± 2% -13.1% 1120 ± 1% slabinfo.kmalloc-64.num_slabs
14473 ± 3% +14.3% 16542 ± 5% numa-meminfo.node0.Active(anon)
14388 ± 3% +14.4% 16455 ± 5% numa-meminfo.node0.AnonPages
888.50 ± 25% -51.9% 427.50 ± 25% numa-meminfo.node0.Inactive(anon)
984.50 ± 25% -46.9% 522.50 ± 21% numa-meminfo.node0.Shmem
82592 ± 47% -28.0% 59466 ± 59% numa-meminfo.node1.Active
18135 ± 5% -18.4% 14804 ± 3% numa-meminfo.node1.AnonPages
6298 ± 46% -46.7% 3357 ± 0% numa-meminfo.node1.Mapped
444985 ± 10% -7.2% 412807 ± 7% numa-meminfo.node1.MemUsed
731.00 ± 0% +646.5% 5457 ± 19% numa-meminfo.node2.AnonHugePages
15188 ± 2% +35.6% 20597 ± 11% numa-meminfo.node2.AnonPages
1106 ± 96% +762.9% 9548 ± 9% numa-meminfo.node2.Inactive(anon)
3355 ± 0% +175.4% 9242 ± 0% numa-meminfo.node2.Mapped
18285 ± 3% -23.1% 14067 ± 10% numa-meminfo.node3.AnonPages
3615 ± 3% +14.4% 4135 ± 5% numa-vmstat.node0.nr_active_anon
3594 ± 3% +14.4% 4112 ± 5% numa-vmstat.node0.nr_anon_pages
221.50 ± 25% -52.0% 106.25 ± 26% numa-vmstat.node0.nr_inactive_anon
245.50 ± 25% -47.0% 130.00 ± 21% numa-vmstat.node0.nr_shmem
14234686 ± 1% -17.8% 11706912 ± 2% numa-vmstat.node0.numa_hit
14199394 ± 1% -17.8% 11671407 ± 2% numa-vmstat.node0.numa_local
353.50 ± 4% -12.0% 311.00 ± 3% numa-vmstat.node1.nr_alloc_batch
4533 ± 5% -18.4% 3700 ± 3% numa-vmstat.node1.nr_anon_pages
1574 ± 46% -46.7% 838.75 ± 0% numa-vmstat.node1.nr_mapped
14234998 ± 3% -15.2% 12069600 ± 1% numa-vmstat.node1.numa_hit
14225256 ± 3% -15.2% 12059120 ± 1% numa-vmstat.node1.numa_local
3796 ± 2% +35.6% 5148 ± 11% numa-vmstat.node2.nr_anon_pages
274.50 ± 96% +769.4% 2386 ± 9% numa-vmstat.node2.nr_inactive_anon
838.50 ± 0% +175.5% 2310 ± 0% numa-vmstat.node2.nr_mapped
13994982 ± 1% -17.0% 11610237 ± 2% numa-vmstat.node2.numa_hit
13954444 ± 1% -17.1% 11569737 ± 2% numa-vmstat.node2.numa_local
4569 ± 3% -23.0% 3517 ± 10% numa-vmstat.node3.nr_anon_pages
14130971 ± 2% -16.9% 11737181 ± 1% numa-vmstat.node3.numa_hit
14088342 ± 2% -17.0% 11695545 ± 1% numa-vmstat.node3.numa_local
70935 ± 0% -100.0% 0.00 ± -1% latency_stats.avg.call_rwsem_down_write_failed.n_tty_flush_buffer.tty_ldisc_hangup.__tty_hangup.tty_ioctl.do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 12974495 ±167% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
150392 ± 0% -100.0% 0.00 ± -1% latency_stats.avg.tty_ldisc_hangup.__tty_hangup.tty_ioctl.do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
1.086e+08 ± 2% -17.0% 90102386 ± 1% latency_stats.hits.inet_csk_accept.inet_accept.SYSC_accept4.SyS_accept.entry_SYSCALL_64_fastpath
2.17e+08 ± 2% -17.0% 1.801e+08 ± 1% latency_stats.hits.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom.SyS_recvfrom.entry_SYSCALL_64_fastpath
70935 ± 0% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_write_failed.n_tty_flush_buffer.tty_ldisc_hangup.__tty_hangup.tty_ioctl.do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
3453 ± 15% +504.0% 20857 ± 38% latency_stats.max.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
1227 ± 4% +719.3% 10052 ± 78% latency_stats.max.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 13170498 ±164% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
150392 ± 0% -100.0% 0.00 ± -1% latency_stats.max.tty_ldisc_hangup.__tty_hangup.tty_ioctl.do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 6395 ±100% latency_stats.sum.blk_execute_rq.sg_io.scsi_cmd_ioctl.scsi_cmd_blk_ioctl.cdrom_ioctl.[cdrom].sr_block_ioctl.[sr_mod].blkdev_ioctl.block_ioctl.do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
11473 ± 20% +244.4% 39509 ± 51% latency_stats.sum.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
70935 ± 0% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.n_tty_flush_buffer.tty_ldisc_hangup.__tty_hangup.tty_ioctl.do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
6178 ± 15% +694.1% 49061 ± 47% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
11241 ± 62% +162.6% 29515 ± 95% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
12653 ± 79% +403.8% 63745 ± 74% latency_stats.sum.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.entry_SYSCALL_64_fastpath
2593 ± 15% +742.1% 21835 ± 75% latency_stats.sum.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
6461 ± 9% +99.8% 12910 ± 53% latency_stats.sum.ep_poll.SyS_epoll_wait.entry_SYSCALL_64_fastpath
1.795e+10 ± 2% +16.6% 2.092e+10 ± 2% latency_stats.sum.inet_csk_accept.inet_accept.SYSC_accept4.SyS_accept.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 13298537 ±162% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
52820 ± 99% -83.0% 8963 ±156% latency_stats.sum.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_do_close.[nfsv4].__nfs4_close.[nfsv4].nfs4_close_sync.[nfsv4].nfs4_close_context.[nfsv4].__put_nfs_open_context.nfs_release.nfs_file_release.__fput.____fput.task_work_run
1.28e+10 ± 0% -5.7% 1.207e+10 ± 0% latency_stats.sum.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom.SyS_recvfrom.entry_SYSCALL_64_fastpath
150392 ± 0% -100.0% 0.00 ± -1% latency_stats.sum.tty_ldisc_hangup.__tty_hangup.tty_ioctl.do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
4.41 ± 8% -30.3% 3.07 ± 6% perf-profile.cpu-cycles.SYSC_accept4.sys_accept.entry_SYSCALL_64_fastpath
2.10 ± 4% -23.8% 1.60 ± 4% perf-profile.cpu-cycles.SYSC_bind.sys_bind.entry_SYSCALL_64_fastpath
25.79 ± 1% +22.0% 31.45 ± 3% perf-profile.cpu-cycles.SYSC_connect.sys_connect.entry_SYSCALL_64_fastpath
10.62 ± 4% -20.2% 8.48 ± 2% perf-profile.cpu-cycles.SYSC_recvfrom.sys_recvfrom.entry_SYSCALL_64_fastpath
23.27 ± 9% -37.0% 14.65 ± 9% perf-profile.cpu-cycles.SYSC_sendto.sys_sendto.entry_SYSCALL_64_fastpath
16.07 ± 6% +65.8% 26.64 ± 5% perf-profile.cpu-cycles.____fput.task_work_run.do_notify_resume.int_signal
21.57 ± 18% -47.1% 11.40 ± 15% perf-profile.cpu-cycles.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task
3.35 ± 15% -28.2% 2.41 ± 4% perf-profile.cpu-cycles.__dentry_kill.dput.__fput.____fput.task_work_run
2.55 ± 8% -60.2% 1.01 ± 36% perf-profile.cpu-cycles.__destroy_inode.destroy_inode.evict.iput.__dentry_kill
2.93 ± 3% -30.0% 2.05 ± 3% perf-profile.cpu-cycles.__dev_queue_xmit.dev_queue_xmit_sk.ip_finish_output2.ip_finish_output.ip_output
29.02 ± 7% -13.1% 25.23 ± 2% perf-profile.cpu-cycles.__do_softirq.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.ip_finish_output2
2.23 ± 27% -45.6% 1.21 ± 3% perf-profile.cpu-cycles.__do_softirq.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.tcp_prequeue_process
15.86 ± 6% +67.0% 26.47 ± 4% perf-profile.cpu-cycles.__fput.____fput.task_work_run.do_notify_resume.int_signal
0.72 ± 20% +57.6% 1.14 ± 5% perf-profile.cpu-cycles.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
25.21 ± 2% +22.9% 30.99 ± 3% perf-profile.cpu-cycles.__inet_stream_connect.inet_stream_connect.SYSC_connect.sys_connect.entry_SYSCALL_64_fastpath
3.15 ± 4% -20.1% 2.52 ± 3% perf-profile.cpu-cycles.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
29.39 ± 7% -13.1% 25.54 ± 2% perf-profile.cpu-cycles.__local_bh_enable_ip.ip_finish_output2.ip_finish_output.ip_output.ip_local_out_sk
2.38 ± 27% -46.5% 1.27 ± 4% perf-profile.cpu-cycles.__local_bh_enable_ip.tcp_prequeue_process.tcp_recvmsg.inet_recvmsg.sock_recvmsg
1.55 ± 2% -19.0% 1.26 ± 1% perf-profile.cpu-cycles.__schedule.schedule.schedule_timeout.sk_wait_data.tcp_recvmsg
1.52 ± 0% -13.5% 1.31 ± 1% perf-profile.cpu-cycles.__sock_create.sys_socket.entry_SYSCALL_64_fastpath
21.41 ± 11% -38.4% 13.18 ± 10% perf-profile.cpu-cycles.__tcp_push_pending_frames.tcp_push.tcp_sendmsg.inet_sendmsg.sock_sendmsg
7.75 ± 3% +55.8% 12.09 ± 4% perf-profile.cpu-cycles.__tcp_push_pending_frames.tcp_send_fin.tcp_close.inet_release.sock_release
2.67 ± 6% -22.0% 2.08 ± 6% perf-profile.cpu-cycles.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
8.39 ± 14% -44.0% 4.70 ± 13% perf-profile.cpu-cycles.__wake_up_common.__wake_up_sync_key.sock_def_readable.tcp_child_process.tcp_v4_do_rcv
16.65 ± 15% -43.3% 9.44 ± 12% perf-profile.cpu-cycles.__wake_up_common.__wake_up_sync_key.tcp_prequeue.tcp_v4_rcv.ip_local_deliver_finish
8.46 ± 14% -43.9% 4.75 ± 12% perf-profile.cpu-cycles.__wake_up_sync_key.sock_def_readable.tcp_child_process.tcp_v4_do_rcv.tcp_v4_rcv
16.86 ± 15% -43.3% 9.56 ± 12% perf-profile.cpu-cycles.__wake_up_sync_key.tcp_prequeue.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
0.92 ± 27% -48.4% 0.47 ± 6% perf-profile.cpu-cycles._raw_spin_lock.inode_doinit_with_dentry.selinux_d_instantiate.security_d_instantiate.d_instantiate
1.87 ± 25% -55.5% 0.83 ± 6% perf-profile.cpu-cycles._raw_spin_lock.selinux_inode_free_security.security_inode_free.__destroy_inode.destroy_inode
0.00 ± -1% +Inf% 6.25 ± 17% perf-profile.cpu-cycles._raw_spin_lock_bh.tcp_get_metrics.tcp_init_metrics.tcp_finish_connect.tcp_rcv_state_process
0.00 ± -1% +Inf% 5.69 ± 12% perf-profile.cpu-cycles._raw_spin_lock_bh.tcp_get_metrics.tcp_init_metrics.tcp_rcv_state_process.tcp_child_process
0.00 ± -1% +Inf% 8.21 ± 12% perf-profile.cpu-cycles._raw_spin_lock_bh.tcp_get_metrics.tcp_update_metrics.tcp_rcv_state_process.tcp_v4_do_rcv
0.00 ± -1% +Inf% 5.65 ± 13% perf-profile.cpu-cycles._raw_spin_lock_bh.tcp_get_metrics.tcp_update_metrics.tcp_time_wait.tcp_fin
14.72 ± 28% -59.7% 5.92 ± 26% perf-profile.cpu-cycles._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task
23.37 ± 16% -45.1% 12.82 ± 13% perf-profile.cpu-cycles.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
8.37 ± 15% -44.1% 4.67 ± 13% perf-profile.cpu-cycles.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.sock_def_readable.tcp_child_process
16.61 ± 15% -43.4% 9.40 ± 12% perf-profile.cpu-cycles.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.tcp_prequeue.tcp_v4_rcv
1.78 ± 8% -22.3% 1.39 ± 1% perf-profile.cpu-cycles.copy_page_to_iter.generic_file_read_iter.__vfs_read.vfs_read.sys_read
1.35 ± 8% -44.6% 0.75 ± 26% perf-profile.cpu-cycles.d_instantiate.sock_alloc_file.SYSC_accept4.sys_accept.entry_SYSCALL_64_fastpath
8.36 ± 15% -44.2% 4.67 ± 12% perf-profile.cpu-cycles.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.sock_def_readable
16.57 ± 15% -43.5% 9.36 ± 12% perf-profile.cpu-cycles.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.tcp_prequeue
1.14 ± 3% -29.8% 0.80 ± 11% perf-profile.cpu-cycles.dequeue_task.deactivate_task.__schedule.schedule.schedule_timeout
2.71 ± 7% -47.2% 1.43 ± 25% perf-profile.cpu-cycles.destroy_inode.evict.iput.__dentry_kill.dput
1.40 ± 0% -14.1% 1.21 ± 2% perf-profile.cpu-cycles.dev_hard_start_xmit.__dev_queue_xmit.dev_queue_xmit_sk.ip_finish_output2.ip_finish_output
3.06 ± 4% -28.5% 2.18 ± 4% perf-profile.cpu-cycles.dev_queue_xmit_sk.ip_finish_output2.ip_finish_output.ip_output.ip_local_out_sk
2.04 ± 7% -31.1% 1.40 ± 3% perf-profile.cpu-cycles.do_filp_open.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
1.64 ± 5% -17.1% 1.36 ± 2% perf-profile.cpu-cycles.do_mmap_pgoff.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
2.43 ± 4% -25.9% 1.80 ± 5% perf-profile.cpu-cycles.do_munmap.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
16.32 ± 5% +64.6% 26.85 ± 4% perf-profile.cpu-cycles.do_notify_resume.int_signal
29.25 ± 7% -13.1% 25.43 ± 2% perf-profile.cpu-cycles.do_softirq.part.13.__local_bh_enable_ip.ip_finish_output2.ip_finish_output.ip_output
2.31 ± 27% -46.0% 1.25 ± 3% perf-profile.cpu-cycles.do_softirq.part.13.__local_bh_enable_ip.tcp_prequeue_process.tcp_recvmsg.inet_recvmsg
29.14 ± 7% -13.0% 25.34 ± 2% perf-profile.cpu-cycles.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_finish_output
2.27 ± 27% -45.7% 1.23 ± 3% perf-profile.cpu-cycles.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.tcp_prequeue_process.tcp_recvmsg
2.43 ± 5% -30.2% 1.70 ± 6% perf-profile.cpu-cycles.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
3.88 ± 17% -36.6% 2.46 ± 1% perf-profile.cpu-cycles.dput.__fput.____fput.task_work_run.do_notify_resume
5.92 ± 5% -23.4% 4.53 ± 4% perf-profile.cpu-cycles.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
22.73 ± 17% -46.0% 12.28 ± 14% perf-profile.cpu-cycles.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate
23.34 ± 16% -45.2% 12.80 ± 13% perf-profile.cpu-cycles.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
23.12 ± 16% -45.6% 12.59 ± 13% perf-profile.cpu-cycles.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up
80.91 ± 1% -12.1% 71.09 ± 1% perf-profile.cpu-cycles.entry_SYSCALL_64_fastpath
2.99 ± 11% -31.5% 2.05 ± 13% perf-profile.cpu-cycles.evict.iput.__dentry_kill.dput.__fput
2.59 ± 6% -23.8% 1.98 ± 5% perf-profile.cpu-cycles.generic_file_read_iter.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
1.25 ± 2% -11.4% 1.10 ± 6% perf-profile.cpu-cycles.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.copy_page_to_iter
1.66 ± 3% -19.0% 1.34 ± 2% perf-profile.cpu-cycles.inet_accept.SYSC_accept4.sys_accept.entry_SYSCALL_64_fastpath
1.06 ± 4% -26.3% 0.79 ± 5% perf-profile.cpu-cycles.inet_bind.SYSC_bind.sys_bind.entry_SYSCALL_64_fastpath
1.44 ± 0% -11.2% 1.28 ± 1% perf-profile.cpu-cycles.inet_csk_accept.inet_accept.SYSC_accept4.sys_accept.entry_SYSCALL_64_fastpath
10.07 ± 4% -19.6% 8.09 ± 3% perf-profile.cpu-cycles.inet_recvmsg.sock_recvmsg.SYSC_recvfrom.sys_recvfrom.entry_SYSCALL_64_fastpath
10.97 ± 3% +112.7% 23.33 ± 6% perf-profile.cpu-cycles.inet_release.sock_release.sock_close.__fput.____fput
22.93 ± 9% -37.5% 14.34 ± 9% perf-profile.cpu-cycles.inet_sendmsg.sock_sendmsg.SYSC_sendto.sys_sendto.entry_SYSCALL_64_fastpath
25.36 ± 2% +22.6% 31.09 ± 3% perf-profile.cpu-cycles.inet_stream_connect.SYSC_connect.sys_connect.entry_SYSCALL_64_fastpath
1.24 ± 9% -71.4% 0.35 ± 45% perf-profile.cpu-cycles.inode_doinit_with_dentry.selinux_d_instantiate.security_d_instantiate.d_instantiate.sock_alloc_file
16.36 ± 5% +64.4% 26.88 ± 4% perf-profile.cpu-cycles.int_signal
32.23 ± 6% -13.7% 27.82 ± 2% perf-profile.cpu-cycles.ip_finish_output.ip_output.ip_local_out_sk.ip_queue_xmit.tcp_transmit_skb
31.97 ± 6% -13.6% 27.61 ± 2% perf-profile.cpu-cycles.ip_finish_output2.ip_finish_output.ip_output.ip_local_out_sk.ip_queue_xmit
5.47 ± 5% -18.4% 4.46 ± 2% perf-profile.cpu-cycles.ip_local_out_sk.ip_queue_xmit.tcp_transmit_skb.tcp_connect.tcp_v4_connect
26.20 ± 8% -12.3% 22.97 ± 2% perf-profile.cpu-cycles.ip_local_out_sk.ip_queue_xmit.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames
5.40 ± 4% -18.4% 4.41 ± 2% perf-profile.cpu-cycles.ip_output.ip_local_out_sk.ip_queue_xmit.tcp_transmit_skb.tcp_connect
1.54 ± 15% -29.0% 1.09 ± 4% perf-profile.cpu-cycles.ip_output.ip_local_out_sk.ip_queue_xmit.tcp_transmit_skb.tcp_send_ack
26.00 ± 8% -12.4% 22.77 ± 2% perf-profile.cpu-cycles.ip_output.ip_local_out_sk.ip_queue_xmit.tcp_transmit_skb.tcp_write_xmit
6.08 ± 4% -18.5% 4.96 ± 2% perf-profile.cpu-cycles.ip_queue_xmit.tcp_transmit_skb.tcp_connect.tcp_v4_connect.__inet_stream_connect
1.23 ± 12% -13.7% 1.06 ± 2% perf-profile.cpu-cycles.ip_queue_xmit.tcp_transmit_skb.tcp_send_ack.__tcp_ack_snd_check.tcp_rcv_established
20.32 ± 12% -39.5% 12.29 ± 10% perf-profile.cpu-cycles.ip_queue_xmit.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_push
6.20 ± 3% +75.5% 10.88 ± 5% perf-profile.cpu-cycles.ip_queue_xmit.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_send_fin
3.12 ± 13% -27.7% 2.25 ± 7% perf-profile.cpu-cycles.iput.__dentry_kill.dput.__fput.____fput
1.25 ± 2% -26.8% 0.92 ± 9% perf-profile.cpu-cycles.is_module_text_address.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk
1.04 ± 8% -25.7% 0.77 ± 4% perf-profile.cpu-cycles.kthread.ret_from_fork
1.15 ± 1% -22.3% 0.90 ± 4% perf-profile.cpu-cycles.loopback_xmit.dev_hard_start_xmit.__dev_queue_xmit.dev_queue_xmit_sk.ip_finish_output2
1.52 ± 26% -81.4% 0.28 ± 17% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock.selinux_inode_free_security.security_inode_free.__destroy_inode
0.00 ± -1% +Inf% 6.06 ± 17% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_bh.tcp_get_metrics.tcp_init_metrics.tcp_finish_connect
0.00 ± -1% +Inf% 5.52 ± 12% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_bh.tcp_get_metrics.tcp_init_metrics.tcp_rcv_state_process
0.00 ± -1% +Inf% 7.96 ± 12% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_bh.tcp_get_metrics.tcp_update_metrics.tcp_rcv_state_process
0.00 ± -1% +Inf% 5.49 ± 14% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_bh.tcp_get_metrics.tcp_update_metrics.tcp_time_wait
14.12 ± 30% -61.0% 5.50 ± 28% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
1.46 ± 2% -11.0% 1.30 ± 0% perf-profile.cpu-cycles.page_fault.copy_page_to_iter.generic_file_read_iter.__vfs_read.vfs_read
1.86 ± 8% -27.5% 1.35 ± 3% perf-profile.cpu-cycles.path_openat.do_filp_open.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
5.37 ± 5% -23.2% 4.12 ± 2% perf-profile.cpu-cycles.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
1.03 ± 8% -27.2% 0.75 ± 5% perf-profile.cpu-cycles.rcu_nocb_kthread.kthread.ret_from_fork
15.27 ± 6% +50.2% 22.93 ± 5% perf-profile.cpu-cycles.release_sock.__inet_stream_connect.inet_stream_connect.SYSC_connect.sys_connect
1.46 ± 2% +577.3% 9.86 ± 9% perf-profile.cpu-cycles.release_sock.tcp_close.inet_release.sock_release.sock_close
1.04 ± 8% -25.7% 0.77 ± 4% perf-profile.cpu-cycles.ret_from_fork
5.97 ± 5% -23.7% 4.56 ± 4% perf-profile.cpu-cycles.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task
0.99 ± 4% -25.3% 0.74 ± 3% perf-profile.cpu-cycles.schedule.schedule_timeout.inet_csk_accept.inet_accept.SYSC_accept4
1.60 ± 4% -20.1% 1.28 ± 1% perf-profile.cpu-cycles.schedule.schedule_timeout.sk_wait_data.tcp_recvmsg.inet_recvmsg
1.00 ± 5% -25.4% 0.75 ± 3% perf-profile.cpu-cycles.schedule_timeout.inet_csk_accept.inet_accept.SYSC_accept4.sys_accept
1.63 ± 4% -20.7% 1.29 ± 1% perf-profile.cpu-cycles.schedule_timeout.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_recvmsg
1.30 ± 7% -59.0% 0.53 ± 40% perf-profile.cpu-cycles.security_d_instantiate.d_instantiate.sock_alloc_file.SYSC_accept4.sys_accept
2.46 ± 9% -73.8% 0.65 ± 47% perf-profile.cpu-cycles.security_inode_free.__destroy_inode.destroy_inode.evict.iput
0.97 ± 2% -34.1% 0.64 ± 6% perf-profile.cpu-cycles.security_sock_rcv_skb.sk_filter.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
1.19 ± 2% -19.1% 0.96 ± 1% perf-profile.cpu-cycles.security_socket_bind.SYSC_bind.sys_bind.entry_SYSCALL_64_fastpath
1.27 ± 8% -60.0% 0.51 ± 42% perf-profile.cpu-cycles.selinux_d_instantiate.security_d_instantiate.d_instantiate.sock_alloc_file.SYSC_accept4
2.43 ± 11% -75.4% 0.60 ± 47% perf-profile.cpu-cycles.selinux_inode_free_security.security_inode_free.__destroy_inode.destroy_inode.evict
1.17 ± 2% -23.0% 0.91 ± 2% perf-profile.cpu-cycles.selinux_socket_bind.security_socket_bind.SYSC_bind.sys_bind.entry_SYSCALL_64_fastpath
1.15 ± 2% -20.4% 0.92 ± 3% perf-profile.cpu-cycles.sk_filter.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver.ip_rcv_finish
2.46 ± 0% -13.3% 2.14 ± 5% perf-profile.cpu-cycles.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom
1.67 ± 15% -27.5% 1.21 ± 3% perf-profile.cpu-cycles.sock_alloc_file.SYSC_accept4.sys_accept.entry_SYSCALL_64_fastpath
11.09 ± 3% +111.9% 23.48 ± 6% perf-profile.cpu-cycles.sock_close.__fput.____fput.task_work_run.do_notify_resume
8.50 ± 14% -43.5% 4.80 ± 12% perf-profile.cpu-cycles.sock_def_readable.tcp_child_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
10.37 ± 5% -19.9% 8.30 ± 3% perf-profile.cpu-cycles.sock_recvmsg.SYSC_recvfrom.sys_recvfrom.entry_SYSCALL_64_fastpath
11.06 ± 2% +112.3% 23.48 ± 6% perf-profile.cpu-cycles.sock_release.sock_close.__fput.____fput.task_work_run
23.07 ± 9% -37.2% 14.49 ± 9% perf-profile.cpu-cycles.sock_sendmsg.SYSC_sendto.sys_sendto.entry_SYSCALL_64_fastpath
4.49 ± 8% -29.5% 3.17 ± 6% perf-profile.cpu-cycles.sys_accept.entry_SYSCALL_64_fastpath
2.16 ± 4% -25.1% 1.62 ± 4% perf-profile.cpu-cycles.sys_bind.entry_SYSCALL_64_fastpath
25.84 ± 1% +21.9% 31.48 ± 3% perf-profile.cpu-cycles.sys_connect.entry_SYSCALL_64_fastpath
2.04 ± 6% -25.9% 1.51 ± 3% perf-profile.cpu-cycles.sys_mmap.entry_SYSCALL_64_fastpath
2.03 ± 6% -26.1% 1.50 ± 3% perf-profile.cpu-cycles.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
2.55 ± 2% -22.3% 1.98 ± 5% perf-profile.cpu-cycles.sys_munmap.entry_SYSCALL_64_fastpath
2.49 ± 5% -29.0% 1.77 ± 7% perf-profile.cpu-cycles.sys_open.entry_SYSCALL_64_fastpath
3.00 ± 9% -21.2% 2.37 ± 2% perf-profile.cpu-cycles.sys_read.entry_SYSCALL_64_fastpath
10.71 ± 4% -20.4% 8.52 ± 2% perf-profile.cpu-cycles.sys_recvfrom.entry_SYSCALL_64_fastpath
23.30 ± 9% -36.9% 14.69 ± 9% perf-profile.cpu-cycles.sys_sendto.entry_SYSCALL_64_fastpath
2.40 ± 3% -22.7% 1.85 ± 6% perf-profile.cpu-cycles.sys_socket.entry_SYSCALL_64_fastpath
16.23 ± 6% +65.1% 26.79 ± 5% perf-profile.cpu-cycles.task_work_run.do_notify_resume.int_signal
0.97 ± 8% -27.8% 0.70 ± 1% perf-profile.cpu-cycles.tcp_ack.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
1.97 ± 7% -25.1% 1.48 ± 2% perf-profile.cpu-cycles.tcp_check_req.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
10.84 ± 3% +113.8% 23.18 ± 6% perf-profile.cpu-cycles.tcp_close.inet_release.sock_release.sock_close.__fput
2.41 ± 3% -23.8% 1.84 ± 4% perf-profile.cpu-cycles.tcp_conn_request.tcp_v4_conn_request.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv
7.36 ± 4% -18.4% 6.00 ± 2% perf-profile.cpu-cycles.tcp_connect.tcp_v4_connect.__inet_stream_connect.inet_stream_connect.SYSC_connect
1.11 ± 9% -33.2% 0.74 ± 8% perf-profile.cpu-cycles.tcp_create_openreq_child.tcp_v4_syn_recv_sock.tcp_check_req.tcp_v4_do_rcv.tcp_v4_rcv
2.54 ± 3% +214.0% 7.96 ± 9% perf-profile.cpu-cycles.tcp_data_queue.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
1.15 ± 11% -27.0% 0.84 ± 3% perf-profile.cpu-cycles.tcp_done.tcp_time_wait.tcp_fin.tcp_data_queue.tcp_rcv_state_process
2.42 ± 2% +222.6% 7.82 ± 9% perf-profile.cpu-cycles.tcp_fin.tcp_data_queue.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv
1.08 ± 5% +573.3% 7.30 ± 15% perf-profile.cpu-cycles.tcp_finish_connect.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock.__inet_stream_connect
0.11 ± 0% +5931.8% 6.63 ± 17% perf-profile.cpu-cycles.tcp_get_metrics.tcp_init_metrics.tcp_finish_connect.tcp_rcv_state_process.tcp_v4_do_rcv
0.08 ± 5% +6947.1% 5.99 ± 11% perf-profile.cpu-cycles.tcp_get_metrics.tcp_init_metrics.tcp_rcv_state_process.tcp_child_process.tcp_v4_do_rcv
0.10 ± 5% +8971.1% 8.62 ± 11% perf-profile.cpu-cycles.tcp_get_metrics.tcp_update_metrics.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock
0.10 ± 10% +5890.0% 5.99 ± 12% perf-profile.cpu-cycles.tcp_get_metrics.tcp_update_metrics.tcp_time_wait.tcp_fin.tcp_data_queue
0.04 ± 42% +19128.6% 6.73 ± 16% perf-profile.cpu-cycles.tcp_init_metrics.tcp_finish_connect.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock
0.02 ± 50% +30275.0% 6.08 ± 11% perf-profile.cpu-cycles.tcp_init_metrics.tcp_rcv_state_process.tcp_child_process.tcp_v4_do_rcv.tcp_v4_rcv
17.34 ± 15% -42.6% 9.95 ± 12% perf-profile.cpu-cycles.tcp_prequeue.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver.ip_rcv_finish
7.10 ± 5% -18.3% 5.80 ± 3% perf-profile.cpu-cycles.tcp_prequeue_process.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom
21.44 ± 11% -38.4% 13.21 ± 10% perf-profile.cpu-cycles.tcp_push.tcp_sendmsg.inet_sendmsg.sock_sendmsg.SYSC_sendto
3.67 ± 6% -28.5% 2.62 ± 2% perf-profile.cpu-cycles.tcp_rcv_established.tcp_v4_do_rcv.tcp_prequeue_process.tcp_recvmsg.inet_recvmsg
0.47 ± 1% +1263.7% 6.48 ± 10% perf-profile.cpu-cycles.tcp_rcv_state_process.tcp_child_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
2.75 ± 4% +221.9% 8.84 ± 12% perf-profile.cpu-cycles.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock.__inet_stream_connect.inet_stream_connect
1.37 ± 2% +611.7% 9.75 ± 9% perf-profile.cpu-cycles.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock.tcp_close.inet_release
6.54 ± 3% +70.4% 11.15 ± 5% perf-profile.cpu-cycles.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
9.93 ± 4% -19.3% 8.01 ± 3% perf-profile.cpu-cycles.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom.sys_recvfrom
2.38 ± 1% -13.5% 2.06 ± 3% perf-profile.cpu-cycles.tcp_send_ack.__tcp_ack_snd_check.tcp_rcv_established.tcp_v4_do_rcv.tcp_prequeue_process
0.98 ± 5% -12.3% 0.85 ± 1% perf-profile.cpu-cycles.tcp_send_ack.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock.__inet_stream_connect
8.46 ± 4% +46.8% 12.42 ± 4% perf-profile.cpu-cycles.tcp_send_fin.tcp_close.inet_release.sock_release.sock_close
22.73 ± 9% -37.8% 14.14 ± 9% perf-profile.cpu-cycles.tcp_sendmsg.inet_sendmsg.sock_sendmsg.SYSC_sendto.sys_sendto
1.45 ± 5% +393.9% 7.14 ± 10% perf-profile.cpu-cycles.tcp_time_wait.tcp_fin.tcp_data_queue.tcp_rcv_state_process.tcp_v4_do_rcv
6.46 ± 4% -18.4% 5.27 ± 2% perf-profile.cpu-cycles.tcp_transmit_skb.tcp_connect.tcp_v4_connect.__inet_stream_connect.inet_stream_connect
1.67 ± 6% -17.7% 1.37 ± 2% perf-profile.cpu-cycles.tcp_transmit_skb.tcp_send_ack.__tcp_ack_snd_check.tcp_rcv_established.tcp_v4_do_rcv
20.79 ± 11% -39.1% 12.65 ± 10% perf-profile.cpu-cycles.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_push.tcp_sendmsg
6.51 ± 3% +71.1% 11.13 ± 5% perf-profile.cpu-cycles.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_send_fin.tcp_close
0.06 ± 50% +14437.5% 8.72 ± 11% perf-profile.cpu-cycles.tcp_update_metrics.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock.tcp_close
0.07 ± 57% +8650.0% 6.12 ± 12% perf-profile.cpu-cycles.tcp_update_metrics.tcp_time_wait.tcp_fin.tcp_data_queue.tcp_rcv_state_process
2.54 ± 2% -22.7% 1.96 ± 3% perf-profile.cpu-cycles.tcp_v4_conn_request.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
8.66 ± 4% -18.6% 7.05 ± 2% perf-profile.cpu-cycles.tcp_v4_connect.__inet_stream_connect.inet_stream_connect.SYSC_connect.sys_connect
2.83 ± 4% +214.2% 8.91 ± 12% perf-profile.cpu-cycles.tcp_v4_do_rcv.release_sock.__inet_stream_connect.inet_stream_connect.SYSC_connect
1.39 ± 2% +604.5% 9.79 ± 9% perf-profile.cpu-cycles.tcp_v4_do_rcv.release_sock.tcp_close.inet_release.sock_release
3.89 ± 3% -64.1% 1.40 ± 18% perf-profile.cpu-cycles.tcp_v4_do_rcv.tcp_prequeue_process.tcp_recvmsg.inet_recvmsg.sock_recvmsg
18.98 ± 4% +31.9% 25.04 ± 2% perf-profile.cpu-cycles.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver.ip_rcv_finish
1.21 ± 2% -11.6% 1.07 ± 5% perf-profile.cpu-cycles.tcp_v4_send_synack.tcp_conn_request.tcp_v4_conn_request.tcp_rcv_state_process.tcp_v4_do_rcv
1.56 ± 1% -15.5% 1.32 ± 1% perf-profile.cpu-cycles.tcp_v4_syn_recv_sock.tcp_check_req.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
21.30 ± 11% -38.4% 13.11 ± 9% perf-profile.cpu-cycles.tcp_write_xmit.__tcp_push_pending_frames.tcp_push.tcp_sendmsg.inet_sendmsg
7.67 ± 3% +56.8% 12.04 ± 5% perf-profile.cpu-cycles.tcp_write_xmit.__tcp_push_pending_frames.tcp_send_fin.tcp_close.inet_release
0.52 ± 21% +97.1% 1.03 ± 7% perf-profile.cpu-cycles.tick_sched_handle.isra.17.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt
0.62 ± 21% +69.2% 1.06 ± 7% perf-profile.cpu-cycles.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
24.85 ± 15% -43.8% 13.97 ± 12% perf-profile.cpu-cycles.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key
23.76 ± 16% -44.9% 13.10 ± 13% perf-profile.cpu-cycles.ttwu_do_activate.constprop.83.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
1.40 ± 1% -8.7% 1.28 ± 1% perf-profile.cpu-cycles.unmap_region.do_munmap.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
0.47 ± 19% +105.8% 0.98 ± 8% perf-profile.cpu-cycles.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
2.94 ± 9% -21.3% 2.31 ± 4% perf-profile.cpu-cycles.vfs_read.sys_read.entry_SYSCALL_64_fastpath
1.94 ± 6% -24.4% 1.47 ± 2% perf-profile.cpu-cycles.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
2.51 ± 2% -24.9% 1.89 ± 5% perf-profile.cpu-cycles.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
412.50 ± 15% -86.1% 57.50 ± 62% sched_debug.cfs_rq[0]:/.blocked_load_avg
430.50 ± 16% -82.7% 74.50 ± 45% sched_debug.cfs_rq[0]:/.tg_load_contrib
66.50 ± 90% +327.8% 284.50 ± 43% sched_debug.cfs_rq[10]:/.blocked_load_avg
7089130 ± 4% +24.4% 8816370 ± 6% sched_debug.cfs_rq[10]:/.min_vruntime
13911 ± 7% -18.7% 11312 ± 6% sched_debug.cfs_rq[10]:/.tg_load_avg
96.00 ± 70% +223.2% 310.25 ± 39% sched_debug.cfs_rq[10]:/.tg_load_contrib
6858681 ± 0% +39.6% 9574721 ± 15% sched_debug.cfs_rq[11]:/.min_vruntime
-1230191 ±-65% -232.2% 1626281 ±104% sched_debug.cfs_rq[11]:/.spread0
13807 ± 8% -18.3% 11284 ± 6% sched_debug.cfs_rq[11]:/.tg_load_avg
601.50 ± 5% +21.2% 729.00 ± 9% sched_debug.cfs_rq[11]:/.utilization_load_avg
7158473 ± 2% +25.0% 8945462 ± 15% sched_debug.cfs_rq[12]:/.min_vruntime
13606 ± 10% -18.8% 11045 ± 9% sched_debug.cfs_rq[12]:/.tg_load_avg
13594 ± 9% -18.9% 11019 ± 9% sched_debug.cfs_rq[13]:/.tg_load_avg
21.50 ± 6% -27.9% 15.50 ± 17% sched_debug.cfs_rq[14]:/.runnable_load_avg
13597 ± 9% -19.5% 10949 ± 9% sched_debug.cfs_rq[14]:/.tg_load_avg
4754449 ± 13% -44.9% 2617952 ± 62% sched_debug.cfs_rq[15]:/.MIN_vruntime
4754449 ± 13% -44.9% 2617952 ± 62% sched_debug.cfs_rq[15]:/.max_vruntime
13581 ± 9% -20.0% 10862 ± 9% sched_debug.cfs_rq[15]:/.tg_load_avg
26.50 ± 1% -28.3% 19.00 ± 18% sched_debug.cfs_rq[16]:/.load
6335443 ± 0% +20.5% 7637118 ± 4% sched_debug.cfs_rq[16]:/.min_vruntime
10.00 ± 30% +327.5% 42.75 ± 27% sched_debug.cfs_rq[16]:/.nr_spread_over
13555 ± 9% -19.8% 10867 ± 9% sched_debug.cfs_rq[16]:/.tg_load_avg
6365178 ± 1% +16.6% 7419294 ± 3% sched_debug.cfs_rq[17]:/.min_vruntime
10.50 ± 14% +276.2% 39.50 ± 12% sched_debug.cfs_rq[17]:/.nr_spread_over
13523 ± 10% -20.1% 10810 ± 9% sched_debug.cfs_rq[17]:/.tg_load_avg
33.50 ± 13% -33.6% 22.25 ± 31% sched_debug.cfs_rq[18]:/.load
6695440 ± 0% +15.1% 7705237 ± 3% sched_debug.cfs_rq[18]:/.min_vruntime
10.50 ± 52% +235.7% 35.25 ± 17% sched_debug.cfs_rq[18]:/.nr_spread_over
21.50 ± 6% -26.7% 15.75 ± 20% sched_debug.cfs_rq[18]:/.runnable_load_avg
13506 ± 10% -20.0% 10804 ± 9% sched_debug.cfs_rq[18]:/.tg_load_avg
6542853 ± 2% +22.7% 8026356 ± 9% sched_debug.cfs_rq[19]:/.min_vruntime
8.00 ± 37% +687.5% 63.00 ± 61% sched_debug.cfs_rq[19]:/.nr_spread_over
-1546202 ±-45% -105.0% 77698 ±1211% sched_debug.cfs_rq[19]:/.spread0
13465 ± 10% -20.0% 10767 ± 9% sched_debug.cfs_rq[19]:/.tg_load_avg
210254 ± 99% +1040.7% 2398287 ± 36% sched_debug.cfs_rq[20]:/.MIN_vruntime
18.00 ± 11% +502.8% 108.50 ±134% sched_debug.cfs_rq[20]:/.load
210254 ± 99% +1040.7% 2398287 ± 36% sched_debug.cfs_rq[20]:/.max_vruntime
6547774 ± 0% +19.6% 7832675 ± 11% sched_debug.cfs_rq[20]:/.min_vruntime
6.50 ± 7% +800.0% 58.50 ± 80% sched_debug.cfs_rq[20]:/.nr_spread_over
13444 ± 10% -20.0% 10751 ± 9% sched_debug.cfs_rq[20]:/.tg_load_avg
3832064 ± 3% -84.0% 613604 ±133% sched_debug.cfs_rq[21]:/.MIN_vruntime
995.00 ± 94% -97.2% 27.50 ± 30% sched_debug.cfs_rq[21]:/.blocked_load_avg
3832064 ± 3% -84.0% 613604 ±133% sched_debug.cfs_rq[21]:/.max_vruntime
6072568 ± 0% +30.3% 7911405 ± 8% sched_debug.cfs_rq[21]:/.min_vruntime
2.50 ± 20% +1680.0% 44.50 ± 70% sched_debug.cfs_rq[21]:/.nr_spread_over
22.00 ± 4% -29.5% 15.50 ± 26% sched_debug.cfs_rq[21]:/.runnable_load_avg
13418 ± 10% -19.9% 10743 ± 9% sched_debug.cfs_rq[21]:/.tg_load_avg
1017 ± 92% -95.7% 43.25 ± 29% sched_debug.cfs_rq[21]:/.tg_load_contrib
6250211 ± 3% +20.8% 7549529 ± 4% sched_debug.cfs_rq[22]:/.min_vruntime
9.00 ± 0% +277.8% 34.00 ± 26% sched_debug.cfs_rq[22]:/.nr_spread_over
13359 ± 10% -19.9% 10706 ± 9% sched_debug.cfs_rq[22]:/.tg_load_avg
2941180 ± 30% -100.0% 0.00 ± 0% sched_debug.cfs_rq[23]:/.MIN_vruntime
2941180 ± 30% -100.0% 0.00 ± 0% sched_debug.cfs_rq[23]:/.max_vruntime
6258078 ± 2% +21.5% 7603127 ± 4% sched_debug.cfs_rq[23]:/.min_vruntime
4.50 ± 55% +627.8% 32.75 ± 41% sched_debug.cfs_rq[23]:/.nr_spread_over
19.00 ± 10% -34.2% 12.50 ± 4% sched_debug.cfs_rq[23]:/.runnable_load_avg
199.00 ± 14% -86.6% 26.75 ± 50% sched_debug.cfs_rq[24]:/.blocked_load_avg
21.00 ± 0% +51.2% 31.75 ± 14% sched_debug.cfs_rq[24]:/.load
217.50 ± 13% -79.0% 45.75 ± 31% sched_debug.cfs_rq[24]:/.tg_load_contrib
4547468 ± 52% -46.0% 2454004 ± 89% sched_debug.cfs_rq[25]:/.MIN_vruntime
10.50 ± 14% +876.2% 102.50 ± 46% sched_debug.cfs_rq[25]:/.blocked_load_avg
4547468 ± 52% -46.0% 2454004 ± 89% sched_debug.cfs_rq[25]:/.max_vruntime
32.50 ± 29% +267.7% 119.50 ± 38% sched_debug.cfs_rq[25]:/.tg_load_contrib
5006815 ± 16% -46.4% 2685439 ± 75% sched_debug.cfs_rq[26]:/.MIN_vruntime
202.50 ± 82% -88.6% 23.00 ± 24% sched_debug.cfs_rq[26]:/.load
5006816 ± 16% -46.4% 2685439 ± 75% sched_debug.cfs_rq[26]:/.max_vruntime
22.50 ± 6% -26.7% 16.50 ± 22% sched_debug.cfs_rq[26]:/.runnable_load_avg
23.00 ± 26% +409.8% 117.25 ± 68% sched_debug.cfs_rq[27]:/.load
16.00 ± 6% +35.9% 21.75 ± 5% sched_debug.cfs_rq[27]:/.runnable_load_avg
560.50 ± 1% +20.5% 675.50 ± 8% sched_debug.cfs_rq[27]:/.utilization_load_avg
54.50 ± 72% -64.7% 19.25 ± 74% sched_debug.cfs_rq[2]:/.blocked_load_avg
783.50 ± 14% -29.4% 553.50 ± 26% sched_debug.cfs_rq[2]:/.utilization_load_avg
182.50 ± 54% -45.9% 98.75 ±104% sched_debug.cfs_rq[30]:/.tg_load_contrib
241.00 ± 80% -80.0% 48.25 ± 91% sched_debug.cfs_rq[32]:/.blocked_load_avg
256.00 ± 75% -74.0% 66.50 ± 68% sched_debug.cfs_rq[32]:/.tg_load_contrib
173.50 ± 18% -66.9% 57.50 ± 82% sched_debug.cfs_rq[33]:/.blocked_load_avg
188.50 ± 16% -60.2% 75.00 ± 66% sched_debug.cfs_rq[33]:/.tg_load_contrib
140.50 ± 34% -70.6% 41.25 ±117% sched_debug.cfs_rq[34]:/.blocked_load_avg
-43518 ±-483% -2133.7% 885021 ± 64% sched_debug.cfs_rq[34]:/.spread0
157.00 ± 30% -64.6% 55.50 ± 81% sched_debug.cfs_rq[34]:/.tg_load_contrib
643.50 ± 5% -9.0% 585.75 ± 6% sched_debug.cfs_rq[36]:/.utilization_load_avg
-412853 ± -2% -208.5% 447796 ±108% sched_debug.cfs_rq[37]:/.spread0
4738014 ± 7% -51.2% 2314342 ± 18% sched_debug.cfs_rq[38]:/.MIN_vruntime
4738014 ± 7% -51.2% 2314342 ± 18% sched_debug.cfs_rq[38]:/.max_vruntime
14340 ± 9% -16.2% 12023 ± 9% sched_debug.cfs_rq[3]:/.tg_load_avg
6938308 ± 2% +28.6% 8920591 ± 14% sched_debug.cfs_rq[40]:/.min_vruntime
7202721 ± 1% +25.7% 9050677 ± 16% sched_debug.cfs_rq[41]:/.min_vruntime
7050596 ± 3% +26.1% 8894040 ± 7% sched_debug.cfs_rq[42]:/.min_vruntime
17.00 ± 58% +475.0% 97.75 ± 71% sched_debug.cfs_rq[42]:/.nr_spread_over
6973443 ± 2% +37.4% 9579368 ± 15% sched_debug.cfs_rq[43]:/.min_vruntime
11.50 ± 21% +843.5% 108.50 ± 60% sched_debug.cfs_rq[43]:/.nr_spread_over
-1116166 ±-63% -246.0% 1630042 ±101% sched_debug.cfs_rq[43]:/.spread0
7073041 ± 1% +28.0% 9051878 ± 17% sched_debug.cfs_rq[44]:/.min_vruntime
23.00 ± 17% -22.8% 17.75 ± 6% sched_debug.cfs_rq[45]:/.load
59.50 ± 47% -73.5% 15.75 ± 80% sched_debug.cfs_rq[46]:/.blocked_load_avg
80.00 ± 37% -59.1% 32.75 ± 29% sched_debug.cfs_rq[46]:/.tg_load_contrib
257.00 ± 84% -83.1% 43.50 ±134% sched_debug.cfs_rq[47]:/.blocked_load_avg
274.50 ± 78% -78.3% 59.50 ±100% sched_debug.cfs_rq[47]:/.tg_load_contrib
271.00 ± 30% -81.7% 49.50 ± 88% sched_debug.cfs_rq[48]:/.blocked_load_avg
6295524 ± 1% +24.1% 7813375 ± 4% sched_debug.cfs_rq[48]:/.min_vruntime
294.00 ± 31% -76.3% 69.75 ± 75% sched_debug.cfs_rq[48]:/.tg_load_contrib
6369900 ± 2% +16.9% 7447878 ± 3% sched_debug.cfs_rq[49]:/.min_vruntime
13.50 ± 33% +150.0% 33.75 ± 14% sched_debug.cfs_rq[49]:/.nr_spread_over
514.50 ± 59% -91.0% 46.50 ±117% sched_debug.cfs_rq[4]:/.blocked_load_avg
14319 ± 9% -17.5% 11810 ± 9% sched_debug.cfs_rq[4]:/.tg_load_avg
537.50 ± 58% -88.3% 62.75 ± 88% sched_debug.cfs_rq[4]:/.tg_load_contrib
4700426 ± 15% -53.8% 2169822 ±105% sched_debug.cfs_rq[50]:/.MIN_vruntime
4700426 ± 15% -53.8% 2169822 ±105% sched_debug.cfs_rq[50]:/.max_vruntime
6663303 ± 1% +14.4% 7623035 ± 4% sched_debug.cfs_rq[50]:/.min_vruntime
7.50 ± 20% +346.7% 33.50 ± 10% sched_debug.cfs_rq[50]:/.nr_spread_over
6460677 ± 1% +24.8% 8066135 ± 8% sched_debug.cfs_rq[51]:/.min_vruntime
9.00 ± 0% +450.0% 49.50 ± 78% sched_debug.cfs_rq[51]:/.nr_spread_over
22.00 ± 0% -22.7% 17.00 ± 13% sched_debug.cfs_rq[51]:/.runnable_load_avg
-1629120 ±-46% -107.2% 116582 ±698% sched_debug.cfs_rq[51]:/.spread0
493.50 ± 3% +42.8% 704.50 ± 1% sched_debug.cfs_rq[51]:/.utilization_load_avg
23.00 ± 17% +31.5% 30.25 ± 5% sched_debug.cfs_rq[52]:/.load
6714359 ± 1% +15.3% 7739751 ± 10% sched_debug.cfs_rq[52]:/.min_vruntime
6.00 ± 0% +691.7% 47.50 ± 84% sched_debug.cfs_rq[52]:/.nr_spread_over
1582 ± 95% -95.3% 74.75 ±101% sched_debug.cfs_rq[53]:/.blocked_load_avg
31.00 ± 0% -30.6% 21.50 ± 7% sched_debug.cfs_rq[53]:/.load
6132965 ± 0% +29.6% 7948754 ± 7% sched_debug.cfs_rq[53]:/.min_vruntime
4.00 ± 0% +950.0% 42.00 ± 55% sched_debug.cfs_rq[53]:/.nr_spread_over
26.00 ± 19% -37.5% 16.25 ± 5% sched_debug.cfs_rq[53]:/.runnable_load_avg
1609 ± 94% -94.4% 90.50 ± 82% sched_debug.cfs_rq[53]:/.tg_load_contrib
6276273 ± 3% +20.3% 7548645 ± 3% sched_debug.cfs_rq[54]:/.min_vruntime
8.00 ± 50% +281.2% 30.50 ± 30% sched_debug.cfs_rq[54]:/.nr_spread_over
6200148 ± 3% +22.5% 7594242 ± 5% sched_debug.cfs_rq[55]:/.min_vruntime
8.50 ± 29% +205.9% 26.00 ± 11% sched_debug.cfs_rq[55]:/.nr_spread_over
11686 ± 19% -20.7% 9267 ± 1% sched_debug.cfs_rq[55]:/.tg_load_avg
3945625 ± 3% -63.4% 1444529 ± 60% sched_debug.cfs_rq[57]:/.MIN_vruntime
164.50 ± 49% -79.8% 33.25 ±106% sched_debug.cfs_rq[57]:/.blocked_load_avg
3945625 ± 3% -63.4% 1444529 ± 60% sched_debug.cfs_rq[57]:/.max_vruntime
16.50 ± 3% -7.6% 15.25 ± 2% sched_debug.cfs_rq[57]:/.runnable_load_avg
181.50 ± 45% -73.3% 48.50 ± 72% sched_debug.cfs_rq[57]:/.tg_load_contrib
558.50 ± 11% +23.7% 691.00 ± 6% sched_debug.cfs_rq[58]:/.utilization_load_avg
4344214 ± 10% -49.2% 2204926 ± 62% sched_debug.cfs_rq[59]:/.MIN_vruntime
4344214 ± 10% -49.2% 2204926 ± 62% sched_debug.cfs_rq[59]:/.max_vruntime
23.00 ± 13% -34.8% 15.00 ± 9% sched_debug.cfs_rq[59]:/.runnable_load_avg
505.00 ± 1% +24.8% 630.00 ± 5% sched_debug.cfs_rq[59]:/.utilization_load_avg
23.00 ± 69% +1059.8% 266.75 ±103% sched_debug.cfs_rq[5]:/.blocked_load_avg
-335362 ± 0% -244.8% 485675 ±105% sched_debug.cfs_rq[5]:/.spread0
14043 ± 7% -16.6% 11716 ± 9% sched_debug.cfs_rq[5]:/.tg_load_avg
38.50 ± 35% +635.7% 283.25 ± 97% sched_debug.cfs_rq[5]:/.tg_load_contrib
3596560 ± 16% -90.0% 361211 ±109% sched_debug.cfs_rq[61]:/.MIN_vruntime
233.50 ± 6% -50.7% 115.00 ± 69% sched_debug.cfs_rq[61]:/.blocked_load_avg
3596560 ± 16% -90.0% 361211 ±109% sched_debug.cfs_rq[61]:/.max_vruntime
17.50 ± 2% -22.9% 13.50 ± 15% sched_debug.cfs_rq[61]:/.runnable_load_avg
250.00 ± 6% -48.4% 129.00 ± 61% sched_debug.cfs_rq[61]:/.tg_load_contrib
3959868 ± 18% -70.8% 1157307 ±145% sched_debug.cfs_rq[6]:/.MIN_vruntime
3959868 ± 18% -70.8% 1157307 ±145% sched_debug.cfs_rq[6]:/.max_vruntime
115.50 ± 57% -41.3% 67.75 ± 88% sched_debug.cfs_rq[6]:/.nr_spread_over
14023 ± 7% -18.4% 11449 ± 7% sched_debug.cfs_rq[6]:/.tg_load_avg
4262919 ± 9% -59.6% 1721402 ± 47% sched_debug.cfs_rq[7]:/.MIN_vruntime
27.00 ± 11% -21.3% 21.25 ± 18% sched_debug.cfs_rq[7]:/.load
4262919 ± 9% -59.6% 1721402 ± 47% sched_debug.cfs_rq[7]:/.max_vruntime
13983 ± 7% -19.5% 11254 ± 6% sched_debug.cfs_rq[7]:/.tg_load_avg
6958883 ± 1% +28.8% 8965833 ± 15% sched_debug.cfs_rq[8]:/.min_vruntime
13965 ± 7% -19.2% 11289 ± 6% sched_debug.cfs_rq[8]:/.tg_load_avg
92.50 ± 79% +303.8% 373.50 ± 73% sched_debug.cfs_rq[9]:/.blocked_load_avg
7295708 ± 1% +25.1% 9123576 ± 16% sched_debug.cfs_rq[9]:/.min_vruntime
13929 ± 7% -18.7% 11329 ± 5% sched_debug.cfs_rq[9]:/.tg_load_avg
106.50 ± 69% +267.4% 391.25 ± 71% sched_debug.cfs_rq[9]:/.tg_load_contrib
3425637 ± 3% -19.0% 2775877 ± 1% sched_debug.cpu#0.nr_switches
3435572 ± 3% -18.9% 2785232 ± 1% sched_debug.cpu#0.sched_count
3411334 ± 3% -19.7% 2737980 ± 1% sched_debug.cpu#0.ttwu_count
3406424 ± 3% -19.8% 2733371 ± 1% sched_debug.cpu#0.ttwu_local
3362926 ± 1% -16.0% 2824862 ± 1% sched_debug.cpu#1.nr_switches
3363852 ± 1% -16.0% 2827260 ± 1% sched_debug.cpu#1.sched_count
3351289 ± 0% -16.1% 2810445 ± 1% sched_debug.cpu#1.ttwu_count
3345266 ± 1% -16.1% 2807598 ± 1% sched_debug.cpu#1.ttwu_local
2.00 ± 0% -50.0% 1.00 ± 0% sched_debug.cpu#10.nr_running
3409466 ± 2% -17.3% 2820801 ± 1% sched_debug.cpu#10.nr_switches
3409919 ± 2% -17.3% 2821090 ± 1% sched_debug.cpu#10.sched_count
3405043 ± 2% -17.6% 2806740 ± 1% sched_debug.cpu#10.ttwu_count
3402736 ± 2% -17.6% 2804699 ± 1% sched_debug.cpu#10.ttwu_local
3360934 ± 4% -16.1% 2821155 ± 2% sched_debug.cpu#11.nr_switches
3361125 ± 4% -16.1% 2821629 ± 2% sched_debug.cpu#11.sched_count
3349685 ± 4% -15.5% 2829002 ± 2% sched_debug.cpu#11.ttwu_count
3347662 ± 4% -16.1% 2807327 ± 2% sched_debug.cpu#11.ttwu_local
198221 ± 4% -18.0% 162576 ± 13% sched_debug.cpu#12.avg_idle
3336806 ± 3% -16.3% 2791418 ± 2% sched_debug.cpu#12.nr_switches
3337240 ± 3% -16.3% 2793004 ± 2% sched_debug.cpu#12.sched_count
3351757 ± 2% -16.7% 2792849 ± 3% sched_debug.cpu#12.ttwu_count
3320369 ± 3% -16.7% 2766292 ± 2% sched_debug.cpu#12.ttwu_local
3348270 ± 4% -16.0% 2813855 ± 1% sched_debug.cpu#13.nr_switches
3348807 ± 4% -15.9% 2817516 ± 1% sched_debug.cpu#13.sched_count
3367005 ± 4% -16.7% 2804182 ± 2% sched_debug.cpu#13.ttwu_count
3337270 ± 5% -16.1% 2800765 ± 2% sched_debug.cpu#13.ttwu_local
17.50 ± 2% -12.9% 15.25 ± 8% sched_debug.cpu#14.cpu_load[3]
3345878 ± 1% -15.3% 2833788 ± 2% sched_debug.cpu#14.nr_switches
3346642 ± 1% -15.2% 2836414 ± 2% sched_debug.cpu#14.sched_count
3362259 ± 1% -15.9% 2826049 ± 2% sched_debug.cpu#14.ttwu_count
3334949 ± 0% -15.3% 2823748 ± 2% sched_debug.cpu#14.ttwu_local
20.00 ± 10% -22.5% 15.50 ± 17% sched_debug.cpu#15.cpu_load[1]
21.50 ± 16% -24.4% 16.25 ± 17% sched_debug.cpu#15.cpu_load[2]
22.50 ± 15% -30.0% 15.75 ± 15% sched_debug.cpu#15.cpu_load[3]
19.50 ± 23% -32.1% 13.25 ± 16% sched_debug.cpu#15.cpu_load[4]
3361095 ± 5% -15.7% 2833752 ± 1% sched_debug.cpu#15.nr_switches
3361392 ± 5% -15.7% 2834059 ± 1% sched_debug.cpu#15.sched_count
3371948 ± 4% -15.8% 2838452 ± 2% sched_debug.cpu#15.ttwu_count
3344405 ± 5% -15.8% 2814682 ± 1% sched_debug.cpu#15.ttwu_local
20.00 ± 25% -28.8% 14.25 ± 12% sched_debug.cpu#16.cpu_load[4]
3390283 ± 2% -17.6% 2791917 ± 1% sched_debug.cpu#16.nr_switches
3391989 ± 2% -17.6% 2794492 ± 1% sched_debug.cpu#16.sched_count
3390096 ± 2% -18.0% 2780908 ± 1% sched_debug.cpu#16.ttwu_count
3385723 ± 2% -18.2% 2770502 ± 1% sched_debug.cpu#16.ttwu_local
25.00 ± 20% -25.0% 18.75 ± 15% sched_debug.cpu#17.cpu_load[0]
25.50 ± 21% -29.4% 18.00 ± 8% sched_debug.cpu#17.cpu_load[1]
26.00 ± 19% -34.6% 17.00 ± 8% sched_debug.cpu#17.cpu_load[2]
25.00 ± 16% -36.0% 16.00 ± 7% sched_debug.cpu#17.cpu_load[3]
22.50 ± 11% -40.0% 13.50 ± 3% sched_debug.cpu#17.cpu_load[4]
3346577 ± 3% -16.2% 2804619 ± 2% sched_debug.cpu#17.nr_switches
3350198 ± 3% -16.2% 2806250 ± 2% sched_debug.cpu#17.sched_count
3360570 ± 2% -17.0% 2790359 ± 2% sched_debug.cpu#17.ttwu_count
3332377 ± 3% -16.3% 2788295 ± 2% sched_debug.cpu#17.ttwu_local
17.00 ± 5% -20.6% 13.50 ± 8% sched_debug.cpu#18.cpu_load[4]
3388086 ± 2% -17.7% 2787730 ± 1% sched_debug.cpu#18.nr_switches
3389536 ± 2% -17.7% 2788705 ± 1% sched_debug.cpu#18.sched_count
3402734 ± 2% -18.4% 2776889 ± 1% sched_debug.cpu#18.ttwu_count
3378329 ± 2% -18.3% 2759313 ± 1% sched_debug.cpu#18.ttwu_local
19.50 ± 7% -16.7% 16.25 ± 13% sched_debug.cpu#19.cpu_load[3]
3370938 ± 1% -17.1% 2794874 ± 1% sched_debug.cpu#19.nr_switches
3371834 ± 1% -16.9% 2800760 ± 1% sched_debug.cpu#19.sched_count
3366351 ± 1% -17.3% 2785348 ± 1% sched_debug.cpu#19.ttwu_count
3362861 ± 1% -17.3% 2781538 ± 1% sched_debug.cpu#19.ttwu_local
18.50 ± 2% -17.6% 15.25 ± 8% sched_debug.cpu#2.cpu_load[2]
17.50 ± 2% -18.6% 14.25 ± 7% sched_debug.cpu#2.cpu_load[3]
16.00 ± 0% -21.9% 12.50 ± 4% sched_debug.cpu#2.cpu_load[4]
2620 ± 41% -44.1% 1465 ± 1% sched_debug.cpu#2.curr->pid
3347900 ± 2% -16.4% 2800049 ± 1% sched_debug.cpu#2.nr_switches
3348447 ± 2% -16.2% 2805748 ± 1% sched_debug.cpu#2.sched_count
3315599 ± 1% -15.9% 2789182 ± 1% sched_debug.cpu#2.ttwu_count
3312074 ± 1% -15.8% 2787168 ± 1% sched_debug.cpu#2.ttwu_local
20.50 ± 7% -18.3% 16.75 ± 6% sched_debug.cpu#20.cpu_load[0]
19.50 ± 2% -14.1% 16.75 ± 6% sched_debug.cpu#20.cpu_load[1]
19.50 ± 2% -15.4% 16.50 ± 6% sched_debug.cpu#20.cpu_load[2]
19.00 ± 5% -17.1% 15.75 ± 6% sched_debug.cpu#20.cpu_load[3]
18.00 ± 5% -27.8% 13.00 ± 5% sched_debug.cpu#20.cpu_load[4]
3329603 ± 1% -16.5% 2779014 ± 2% sched_debug.cpu#20.nr_switches
-2.00 ±-50% -225.0% 2.50 ± 60% sched_debug.cpu#20.nr_uninterruptible
3330833 ± 1% -16.5% 2780361 ± 2% sched_debug.cpu#20.sched_count
3330334 ± 1% -16.8% 2771814 ± 2% sched_debug.cpu#20.ttwu_count
3319066 ± 0% -17.1% 2752947 ± 3% sched_debug.cpu#20.ttwu_local
21.50 ± 16% -19.8% 17.25 ± 13% sched_debug.cpu#21.cpu_load[0]
3359699 ± 2% -16.8% 2795372 ± 1% sched_debug.cpu#21.nr_switches
3362760 ± 2% -16.8% 2796868 ± 1% sched_debug.cpu#21.sched_count
3357026 ± 2% -17.0% 2787801 ± 2% sched_debug.cpu#21.ttwu_count
3354073 ± 2% -17.1% 2780178 ± 2% sched_debug.cpu#21.ttwu_local
28.50 ± 36% -45.6% 15.50 ± 16% sched_debug.cpu#22.cpu_load[0]
26.00 ± 30% -40.4% 15.50 ± 10% sched_debug.cpu#22.cpu_load[1]
20.50 ± 26% -30.5% 14.25 ± 7% sched_debug.cpu#22.cpu_load[4]
3335974 ± 3% -15.9% 2803908 ± 2% sched_debug.cpu#22.nr_switches
3337011 ± 3% -15.9% 2805212 ± 2% sched_debug.cpu#22.sched_count
777.50 ± 11% +105.5% 1597 ± 32% sched_debug.cpu#22.sched_goidle
3358426 ± 4% -16.9% 2792197 ± 2% sched_debug.cpu#22.ttwu_count
3323362 ± 3% -16.1% 2788179 ± 2% sched_debug.cpu#22.ttwu_local
18.50 ± 2% -25.7% 13.75 ± 9% sched_debug.cpu#23.cpu_load[4]
3376573 ± 2% -16.9% 2805389 ± 2% sched_debug.cpu#23.nr_switches
3377409 ± 2% -16.9% 2807165 ± 2% sched_debug.cpu#23.sched_count
3375600 ± 2% -16.6% 2814478 ± 1% sched_debug.cpu#23.ttwu_count
3373481 ± 2% -17.2% 2794862 ± 2% sched_debug.cpu#23.ttwu_local
21.50 ± 2% -15.1% 18.25 ± 7% sched_debug.cpu#24.cpu_load[3]
17.50 ± 2% -12.9% 15.25 ± 8% sched_debug.cpu#24.cpu_load[4]
3422252 ± 3% -18.0% 2807177 ± 2% sched_debug.cpu#24.nr_switches
3422784 ± 3% -18.0% 2807540 ± 2% sched_debug.cpu#24.sched_count
3419545 ± 3% -17.2% 2830470 ± 2% sched_debug.cpu#24.ttwu_count
3417668 ± 3% -18.3% 2793422 ± 2% sched_debug.cpu#24.ttwu_local
3410986 ± 3% -17.2% 2825710 ± 2% sched_debug.cpu#25.nr_switches
3411188 ± 3% -17.1% 2826362 ± 2% sched_debug.cpu#25.sched_count
3407611 ± 3% -17.6% 2806667 ± 2% sched_debug.cpu#25.ttwu_count
3404481 ± 3% -17.7% 2800759 ± 2% sched_debug.cpu#25.ttwu_local
3326290 ± 1% -15.4% 2815129 ± 1% sched_debug.cpu#26.nr_switches
3326843 ± 1% -15.4% 2815612 ± 1% sched_debug.cpu#26.sched_count
3305814 ± 0% -15.3% 2800669 ± 2% sched_debug.cpu#26.ttwu_count
3300778 ± 0% -15.2% 2797457 ± 1% sched_debug.cpu#26.ttwu_local
16.00 ± 6% +26.6% 20.25 ± 6% sched_debug.cpu#27.cpu_load[0]
16.00 ± 0% +18.8% 19.00 ± 6% sched_debug.cpu#27.cpu_load[1]
3338612 ± 1% -16.1% 2802024 ± 1% sched_debug.cpu#27.nr_switches
3339253 ± 1% -16.0% 2803686 ± 1% sched_debug.cpu#27.sched_count
3319860 ± 1% -16.5% 2772945 ± 1% sched_debug.cpu#27.ttwu_count
3317725 ± 1% -16.5% 2769863 ± 1% sched_debug.cpu#27.ttwu_local
3323634 ± 1% -15.4% 2813047 ± 1% sched_debug.cpu#28.nr_switches
0.50 ±900% -1550.0% -7.25 ±-24% sched_debug.cpu#28.nr_uninterruptible
3325278 ± 1% -15.3% 2815887 ± 1% sched_debug.cpu#28.sched_count
3305205 ± 0% -15.0% 2808674 ± 1% sched_debug.cpu#28.ttwu_count
3302899 ± 0% -15.0% 2806132 ± 1% sched_debug.cpu#28.ttwu_local
17.00 ± 17% -23.5% 13.00 ± 5% sched_debug.cpu#29.cpu_load[4]
28.50 ± 1% -21.1% 22.50 ± 7% sched_debug.cpu#29.load
2.00 ± 0% -50.0% 1.00 ± 0% sched_debug.cpu#29.nr_running
3392389 ± 2% -17.1% 2812863 ± 1% sched_debug.cpu#29.nr_switches
3396101 ± 3% -17.1% 2815602 ± 1% sched_debug.cpu#29.sched_count
3383972 ± 2% -17.7% 2783822 ± 0% sched_debug.cpu#29.ttwu_count
3382098 ± 2% -17.9% 2777215 ± 0% sched_debug.cpu#29.ttwu_local
3379478 ± 3% -17.8% 2777895 ± 1% sched_debug.cpu#3.nr_switches
3380029 ± 3% -17.8% 2779026 ± 1% sched_debug.cpu#3.sched_count
3369398 ± 3% -17.7% 2773613 ± 1% sched_debug.cpu#3.ttwu_count
3364962 ± 3% -17.7% 2770455 ± 1% sched_debug.cpu#3.ttwu_local
88916 ± 66% +270.2% 329187 ± 62% sched_debug.cpu#30.avg_idle
3367748 ± 4% -16.5% 2813130 ± 1% sched_debug.cpu#30.nr_switches
3377256 ± 4% -16.7% 2813415 ± 1% sched_debug.cpu#30.sched_count
3388873 ± 3% -17.2% 2805998 ± 1% sched_debug.cpu#30.ttwu_count
3359180 ± 4% -16.5% 2803761 ± 1% sched_debug.cpu#30.ttwu_local
22.50 ± 11% -28.9% 16.00 ± 7% sched_debug.cpu#31.cpu_load[2]
22.50 ± 15% -30.0% 15.75 ± 5% sched_debug.cpu#31.cpu_load[3]
20.00 ± 20% -35.0% 13.00 ± 0% sched_debug.cpu#31.cpu_load[4]
2141 ± 28% -30.1% 1496 ± 2% sched_debug.cpu#31.curr->pid
3395456 ± 3% -17.3% 2809733 ± 1% sched_debug.cpu#31.nr_switches
3397100 ± 3% -17.3% 2809973 ± 1% sched_debug.cpu#31.sched_count
3382361 ± 3% -17.1% 2802556 ± 1% sched_debug.cpu#31.ttwu_count
3377929 ± 3% -17.1% 2800372 ± 1% sched_debug.cpu#31.ttwu_local
186866 ± 0% +108.0% 388617 ± 75% sched_debug.cpu#32.avg_idle
3367071 ± 4% -17.5% 2779491 ± 1% sched_debug.cpu#32.nr_switches
-2.50 ±-20% -90.0% -0.25 ±-331% sched_debug.cpu#32.nr_uninterruptible
3367419 ± 4% -17.4% 2779888 ± 1% sched_debug.cpu#32.sched_count
3349904 ± 4% -16.8% 2788296 ± 2% sched_debug.cpu#32.ttwu_count
3348294 ± 4% -17.7% 2756567 ± 1% sched_debug.cpu#32.ttwu_local
18.00 ± 0% +48.6% 26.75 ± 29% sched_debug.cpu#33.load
3402872 ± 2% -19.0% 2756152 ± 2% sched_debug.cpu#33.nr_switches
3403056 ± 2% -19.0% 2757175 ± 2% sched_debug.cpu#33.sched_count
3384818 ± 2% -18.6% 2754433 ± 1% sched_debug.cpu#33.ttwu_count
3378162 ± 2% -19.1% 2731749 ± 2% sched_debug.cpu#33.ttwu_local
3374883 ± 2% -17.7% 2778380 ± 1% sched_debug.cpu#34.nr_switches
3375300 ± 2% -17.7% 2778950 ± 1% sched_debug.cpu#34.sched_count
3397835 ± 2% -18.5% 2769303 ± 1% sched_debug.cpu#34.ttwu_count
3363719 ± 2% -17.7% 2767598 ± 1% sched_debug.cpu#34.ttwu_local
14.00 ± 7% +32.1% 18.50 ± 14% sched_debug.cpu#35.cpu_load[0]
3383658 ± 2% -17.7% 2784568 ± 1% sched_debug.cpu#35.nr_switches
3384254 ± 2% -17.7% 2785577 ± 1% sched_debug.cpu#35.sched_count
972.00 ± 5% +91.4% 1860 ± 24% sched_debug.cpu#35.sched_goidle
3378070 ± 2% -17.7% 2779124 ± 1% sched_debug.cpu#35.ttwu_count
3374130 ± 3% -17.8% 2775164 ± 1% sched_debug.cpu#35.ttwu_local
3344567 ± 1% -17.0% 2776642 ± 2% sched_debug.cpu#36.nr_switches
3347603 ± 1% -17.0% 2777079 ± 2% sched_debug.cpu#36.sched_count
2011 ± 12% -40.7% 1192 ± 18% sched_debug.cpu#36.sched_goidle
3328996 ± 1% -16.7% 2774072 ± 2% sched_debug.cpu#36.ttwu_count
3326718 ± 1% -16.9% 2763776 ± 2% sched_debug.cpu#36.ttwu_local
17.00 ± 11% -20.6% 13.50 ± 12% sched_debug.cpu#37.cpu_load[1]
15.50 ± 9% -22.6% 12.00 ± 10% sched_debug.cpu#37.cpu_load[4]
3389585 ± 2% -18.0% 2779658 ± 2% sched_debug.cpu#37.nr_switches
3390637 ± 2% -18.0% 2779914 ± 2% sched_debug.cpu#37.sched_count
3386658 ± 3% -18.5% 2760062 ± 3% sched_debug.cpu#37.ttwu_count
3384129 ± 3% -18.5% 2756964 ± 2% sched_debug.cpu#37.ttwu_local
3361876 ± 2% -17.2% 2782005 ± 1% sched_debug.cpu#38.nr_switches
3362739 ± 2% -17.3% 2782445 ± 1% sched_debug.cpu#38.sched_count
3349858 ± 2% -16.9% 2784512 ± 2% sched_debug.cpu#38.ttwu_count
3347162 ± 2% -17.3% 2768586 ± 1% sched_debug.cpu#38.ttwu_local
522904 ± 39% -64.2% 186963 ± 20% sched_debug.cpu#39.avg_idle
3333390 ± 1% -15.6% 2815024 ± 2% sched_debug.cpu#39.nr_switches
-0.50 ±-100% -500.0% 2.00 ± 50% sched_debug.cpu#39.nr_uninterruptible
3333497 ± 1% -15.5% 2815476 ± 2% sched_debug.cpu#39.sched_count
610.00 ± 50% +126.5% 1381 ± 12% sched_debug.cpu#39.sched_goidle
3316557 ± 1% -15.1% 2816247 ± 1% sched_debug.cpu#39.ttwu_count
3315597 ± 1% -15.5% 2802276 ± 2% sched_debug.cpu#39.ttwu_local
19.00 ± 5% -19.7% 15.25 ± 8% sched_debug.cpu#4.cpu_load[1]
19.50 ± 7% -24.4% 14.75 ± 5% sched_debug.cpu#4.cpu_load[2]
19.50 ± 7% -25.6% 14.50 ± 5% sched_debug.cpu#4.cpu_load[3]
17.50 ± 8% -28.6% 12.50 ± 6% sched_debug.cpu#4.cpu_load[4]
3357236 ± 1% -17.4% 2772508 ± 2% sched_debug.cpu#4.nr_switches
3358957 ± 1% -17.4% 2773251 ± 2% sched_debug.cpu#4.sched_count
3347671 ± 1% -17.5% 2761172 ± 2% sched_debug.cpu#4.ttwu_count
3343875 ± 1% -17.6% 2755498 ± 2% sched_debug.cpu#4.ttwu_local
19.00 ± 5% -17.1% 15.75 ± 11% sched_debug.cpu#40.cpu_load[1]
18.50 ± 2% -17.6% 15.25 ± 5% sched_debug.cpu#40.cpu_load[2]
17.00 ± 5% -11.8% 15.00 ± 0% sched_debug.cpu#40.cpu_load[3]
16.00 ± 6% -23.4% 12.25 ± 3% sched_debug.cpu#40.cpu_load[4]
3393744 ± 2% -16.7% 2826272 ± 1% sched_debug.cpu#40.nr_switches
3394388 ± 2% -16.7% 2826849 ± 1% sched_debug.cpu#40.sched_count
3393412 ± 2% -16.4% 2837310 ± 2% sched_debug.cpu#40.ttwu_count
3391881 ± 2% -16.9% 2817881 ± 1% sched_debug.cpu#40.ttwu_local
17.50 ± 8% -20.0% 14.00 ± 7% sched_debug.cpu#41.cpu_load[0]
21.50 ± 25% -33.7% 14.25 ± 11% sched_debug.cpu#41.cpu_load[3]
20.50 ± 26% -36.6% 13.00 ± 16% sched_debug.cpu#41.cpu_load[4]
3392441 ± 3% -17.6% 2796813 ± 0% sched_debug.cpu#41.nr_switches
5.50 ± 27% -100.0% 0.00 ± 2% sched_debug.cpu#41.nr_uninterruptible
3392851 ± 3% -17.6% 2797077 ± 0% sched_debug.cpu#41.sched_count
3386583 ± 3% -18.1% 2773377 ± 0% sched_debug.cpu#41.ttwu_count
3384993 ± 3% -18.1% 2771410 ± 0% sched_debug.cpu#41.ttwu_local
3399088 ± 2% -17.3% 2811241 ± 1% sched_debug.cpu#42.nr_switches
3399572 ± 2% -17.3% 2811641 ± 1% sched_debug.cpu#42.sched_count
3398203 ± 2% -17.5% 2803836 ± 1% sched_debug.cpu#42.ttwu_count
3396488 ± 2% -17.5% 2801995 ± 1% sched_debug.cpu#42.ttwu_local
19.50 ± 12% -17.9% 16.00 ± 15% sched_debug.cpu#43.cpu_load[0]
3392351 ± 2% -17.0% 2814479 ± 2% sched_debug.cpu#43.nr_switches
3392695 ± 2% -17.0% 2815000 ± 2% sched_debug.cpu#43.sched_count
626.50 ± 4% +168.2% 1680 ± 51% sched_debug.cpu#43.sched_goidle
3392280 ± 2% -17.1% 2813877 ± 2% sched_debug.cpu#43.ttwu_count
3391155 ± 2% -17.3% 2805345 ± 2% sched_debug.cpu#43.ttwu_local
19.00 ± 15% -19.7% 15.25 ± 7% sched_debug.cpu#44.cpu_load[3]
17.50 ± 8% -22.9% 13.50 ± 6% sched_debug.cpu#44.cpu_load[4]
3329658 ± 4% -15.7% 2805446 ± 2% sched_debug.cpu#44.nr_switches
3330513 ± 4% -15.7% 2806109 ± 2% sched_debug.cpu#44.sched_count
2273 ± 15% -48.2% 1178 ± 49% sched_debug.cpu#44.sched_goidle
3301077 ± 4% -14.9% 2808499 ± 2% sched_debug.cpu#44.ttwu_count
3298549 Â 4% -15.5% 2788871 Â 2% sched_debug.cpu#44.ttwu_local
19.00 Â 15% -25.0% 14.25 Â 12% sched_debug.cpu#45.cpu_load[0]
17.50 Â 8% -12.9% 15.25 Â 7% sched_debug.cpu#45.cpu_load[1]
17.50 Â 2% -12.9% 15.25 Â 7% sched_debug.cpu#45.cpu_load[2]
17.00 Â 0% -17.6% 14.00 Â 8% sched_debug.cpu#45.cpu_load[3]
14.50 Â 3% -12.1% 12.75 Â 8% sched_debug.cpu#45.cpu_load[4]
3385958 Â 3% -16.8% 2816202 Â 2% sched_debug.cpu#45.nr_switches
3386858 Â 3% -16.8% 2817175 Â 2% sched_debug.cpu#45.sched_count
1506 Â 5% +58.3% 2384 Â 8% sched_debug.cpu#45.sched_goidle
3377144 Â 3% -17.0% 2802342 Â 2% sched_debug.cpu#45.ttwu_count
3374601 Â 3% -17.1% 2799191 Â 2% sched_debug.cpu#45.ttwu_local
18.00 Â 11% -20.8% 14.25 Â 13% sched_debug.cpu#46.cpu_load[1]
18.50 Â 8% -23.0% 14.25 Â 13% sched_debug.cpu#46.cpu_load[2]
18.50 Â 8% -27.0% 13.50 Â 15% sched_debug.cpu#46.cpu_load[3]
16.00 Â 6% -26.6% 11.75 Â 12% sched_debug.cpu#46.cpu_load[4]
3340344 Â 1% -15.3% 2828995 Â 2% sched_debug.cpu#46.nr_switches
3340631 Â 1% -15.3% 2829614 Â 2% sched_debug.cpu#46.sched_count
3321199 Â 0% -14.9% 2825366 Â 2% sched_debug.cpu#46.ttwu_count
3319829 Â 0% -15.0% 2823128 Â 2% sched_debug.cpu#46.ttwu_local
3376491 Â 3% -16.5% 2820818 Â 1% sched_debug.cpu#47.nr_switches
3377171 Â 3% -16.5% 2821388 Â 1% sched_debug.cpu#47.sched_count
3363508 Â 4% -16.2% 2819712 Â 1% sched_debug.cpu#47.ttwu_count
3360937 Â 4% -16.2% 2815610 Â 1% sched_debug.cpu#47.ttwu_local
23.00 Â 30% -27.2% 16.75 Â 18% sched_debug.cpu#48.cpu_load[0]
1593 Â 6% -8.6% 1456 Â 2% sched_debug.cpu#48.curr->pid
30.00 Â 16% -35.8% 19.25 Â 30% sched_debug.cpu#48.load
3353282 Â 2% -17.0% 2782329 Â 2% sched_debug.cpu#48.nr_switches
3353606 Â 2% -17.0% 2782524 Â 2% sched_debug.cpu#48.sched_count
1956 Â 50% -62.3% 738.50 Â 36% sched_debug.cpu#48.sched_goidle
3346441 Â 1% -16.6% 2792501 Â 2% sched_debug.cpu#48.ttwu_count
3342202 Â 1% -17.4% 2760527 Â 3% sched_debug.cpu#48.ttwu_local
21.00 Â 14% -25.0% 15.75 Â 20% sched_debug.cpu#49.cpu_load[0]
21.00 Â 4% -23.8% 16.00 Â 18% sched_debug.cpu#49.cpu_load[1]
20.00 Â 0% -25.0% 15.00 Â 16% sched_debug.cpu#49.cpu_load[3]
3334691 Â 3% -16.1% 2799376 Â 1% sched_debug.cpu#49.nr_switches
3334934 Â 3% -16.0% 2799954 Â 1% sched_debug.cpu#49.sched_count
853.00 Â 23% +71.2% 1460 Â 12% sched_debug.cpu#49.sched_goidle
3315622 Â 3% -15.8% 2790957 Â 2% sched_debug.cpu#49.ttwu_count
3313934 Â 3% -15.8% 2788906 Â 2% sched_debug.cpu#49.ttwu_local
3400135 Â 2% -18.1% 2785157 Â 2% sched_debug.cpu#5.nr_switches
-0.50 Â-300% +900.0% -5.00 Â-54% sched_debug.cpu#5.nr_uninterruptible
3401216 Â 2% -18.1% 2786330 Â 2% sched_debug.cpu#5.sched_count
2868 Â 2% -53.6% 1331 Â 37% sched_debug.cpu#5.sched_goidle
3388199 Â 3% -18.1% 2774308 Â 2% sched_debug.cpu#5.ttwu_count
3385592 Â 3% -18.1% 2771534 Â 2% sched_debug.cpu#5.ttwu_local
3362317 Â 3% -17.8% 2764907 Â 1% sched_debug.cpu#50.nr_switches
3362342 Â 3% -17.8% 2765068 Â 1% sched_debug.cpu#50.sched_count
290.00 Â 15% +169.6% 781.75 Â 36% sched_debug.cpu#50.sched_goidle
3358579 Â 4% -17.8% 2759976 Â 2% sched_debug.cpu#50.ttwu_count
3351864 Â 3% -18.5% 2732913 Â 2% sched_debug.cpu#50.ttwu_local
23.00 Â 13% -26.1% 17.00 Â 11% sched_debug.cpu#51.cpu_load[0]
3329504 Â 2% -16.3% 2787873 Â 2% sched_debug.cpu#51.nr_switches
3329749 Â 2% -16.3% 2788131 Â 2% sched_debug.cpu#51.sched_count
3313183 Â 2% -16.1% 2779415 Â 2% sched_debug.cpu#51.ttwu_count
3310242 Â 2% -16.1% 2776396 Â 2% sched_debug.cpu#51.ttwu_local
19.50 Â 17% +35.9% 26.50 Â 10% sched_debug.cpu#52.load
3380850 Â 2% -18.5% 2755852 Â 2% sched_debug.cpu#52.nr_switches
3381671 Â 2% -18.5% 2756388 Â 2% sched_debug.cpu#52.sched_count
1954 Â 23% -30.9% 1350 Â 48% sched_debug.cpu#52.sched_goidle
3379527 Â 2% -19.2% 2731052 Â 3% sched_debug.cpu#52.ttwu_count
3377028 Â 2% -19.4% 2720797 Â 3% sched_debug.cpu#52.ttwu_local
30.50 Â 1% -41.0% 18.00 Â 14% sched_debug.cpu#53.cpu_load[0]
30.50 Â 1% -44.3% 17.00 Â 11% sched_debug.cpu#53.cpu_load[1]
30.00 Â 6% -43.3% 17.00 Â 7% sched_debug.cpu#53.cpu_load[2]
28.00 Â 7% -43.8% 15.75 Â 8% sched_debug.cpu#53.cpu_load[3]
23.50 Â 6% -40.4% 14.00 Â 8% sched_debug.cpu#53.cpu_load[4]
1594 Â 6% -6.5% 1491 Â 3% sched_debug.cpu#53.curr->pid
3312192 Â 2% -15.6% 2796911 Â 2% sched_debug.cpu#53.nr_switches
3312691 Â 2% -15.6% 2797166 Â 2% sched_debug.cpu#53.sched_count
3298489 Â 2% -15.4% 2790891 Â 2% sched_debug.cpu#53.ttwu_count
3294908 Â 2% -15.5% 2784981 Â 2% sched_debug.cpu#53.ttwu_local
3349083 Â 2% -16.9% 2782148 Â 2% sched_debug.cpu#54.nr_switches
3349205 Â 2% -16.9% 2782549 Â 2% sched_debug.cpu#54.sched_count
3343122 Â 2% -15.9% 2812809 Â 1% sched_debug.cpu#54.ttwu_count
3342296 Â 2% -17.2% 2766815 Â 3% sched_debug.cpu#54.ttwu_local
21.50 Â 6% -24.4% 16.25 Â 22% sched_debug.cpu#55.cpu_load[0]
22.00 Â 4% -22.7% 17.00 Â 7% sched_debug.cpu#55.cpu_load[1]
22.00 Â 0% -25.0% 16.50 Â 3% sched_debug.cpu#55.cpu_load[2]
22.00 Â 4% -28.4% 15.75 Â 5% sched_debug.cpu#55.cpu_load[3]
21.00 Â 9% -34.5% 13.75 Â 6% sched_debug.cpu#55.cpu_load[4]
3363186 Â 2% -16.5% 2807090 Â 1% sched_debug.cpu#55.nr_switches
0.50 Â300% -450.0% -1.75 Â-47% sched_debug.cpu#55.nr_uninterruptible
3365372 Â 2% -16.6% 2807431 Â 1% sched_debug.cpu#55.sched_count
3362935 Â 2% -17.1% 2786327 Â 1% sched_debug.cpu#55.ttwu_count
3361040 Â 2% -17.3% 2780406 Â 1% sched_debug.cpu#55.ttwu_local
19.00 Â 10% -25.0% 14.25 Â 15% sched_debug.cpu#56.cpu_load[0]
19.50 Â 7% -20.5% 15.50 Â 5% sched_debug.cpu#56.cpu_load[1]
18.50 Â 8% -12.2% 16.25 Â 6% sched_debug.cpu#56.cpu_load[2]
3420766 Â 3% -17.8% 2811122 Â 2% sched_debug.cpu#56.nr_switches
3421009 Â 3% -17.8% 2811594 Â 2% sched_debug.cpu#56.sched_count
3414850 Â 3% -17.7% 2809656 Â 2% sched_debug.cpu#56.ttwu_count
3411940 Â 3% -17.8% 2804844 Â 2% sched_debug.cpu#56.ttwu_local
3404271 Â 3% -17.4% 2811744 Â 2% sched_debug.cpu#57.nr_switches
3404575 Â 3% -17.4% 2812005 Â 2% sched_debug.cpu#57.sched_count
3393002 Â 3% -16.8% 2822416 Â 2% sched_debug.cpu#57.ttwu_count
3390758 Â 3% -17.5% 2798177 Â 2% sched_debug.cpu#57.ttwu_local
184965 Â 2% +81.0% 334775 Â 61% sched_debug.cpu#58.avg_idle
3303870 Â 0% -15.1% 2805233 Â 2% sched_debug.cpu#58.nr_switches
3308111 Â 0% -15.2% 2805585 Â 2% sched_debug.cpu#58.sched_count
3310871 Â 1% -15.4% 2799389 Â 2% sched_debug.cpu#58.ttwu_count
3288466 Â 0% -15.0% 2796475 Â 1% sched_debug.cpu#58.ttwu_local
21.00 Â 14% -20.2% 16.75 Â 15% sched_debug.cpu#59.cpu_load[2]
20.50 Â 12% -20.7% 16.25 Â 14% sched_debug.cpu#59.cpu_load[3]
19.00 Â 10% -27.6% 13.75 Â 13% sched_debug.cpu#59.cpu_load[4]
2273 Â 33% -34.6% 1485 Â 3% sched_debug.cpu#59.curr->pid
3351441 Â 2% -16.9% 2784174 Â 1% sched_debug.cpu#59.nr_switches
3351777 Â 2% -16.9% 2784602 Â 1% sched_debug.cpu#59.sched_count
3367741 Â 2% -17.3% 2786119 Â 1% sched_debug.cpu#59.ttwu_count
3342847 Â 2% -17.2% 2767788 Â 1% sched_debug.cpu#59.ttwu_local
593061 Â 68% -69.2% 182397 Â 4% sched_debug.cpu#6.avg_idle
3406495 Â 2% -17.7% 2803402 Â 2% sched_debug.cpu#6.nr_switches
3407964 Â 2% -17.7% 2804520 Â 2% sched_debug.cpu#6.sched_count
3403729 Â 2% -17.9% 2794779 Â 2% sched_debug.cpu#6.ttwu_count
3400889 Â 3% -18.0% 2789570 Â 2% sched_debug.cpu#6.ttwu_local
3288065 Â 2% -14.8% 2799854 Â 1% sched_debug.cpu#60.nr_switches
3289461 Â 2% -14.9% 2800303 Â 1% sched_debug.cpu#60.sched_count
3265892 Â 2% -14.7% 2784997 Â 2% sched_debug.cpu#60.ttwu_count
3264174 Â 2% -14.7% 2783285 Â 2% sched_debug.cpu#60.ttwu_local
18.50 Â 8% -18.9% 15.00 Â 4% sched_debug.cpu#61.cpu_load[0]
18.50 Â 2% -18.9% 15.00 Â 6% sched_debug.cpu#61.cpu_load[1]
17.50 Â 2% -14.3% 15.00 Â 6% sched_debug.cpu#61.cpu_load[2]
17.00 Â 0% -14.7% 14.50 Â 5% sched_debug.cpu#61.cpu_load[3]
3387333 Â 3% -18.2% 2771097 Â 1% sched_debug.cpu#61.nr_switches
3387619 Â 3% -18.2% 2771881 Â 1% sched_debug.cpu#61.sched_count
1259 Â 10% +64.7% 2073 Â 18% sched_debug.cpu#61.sched_goidle
3385845 Â 3% -18.5% 2761058 Â 3% sched_debug.cpu#61.ttwu_count
3384417 Â 3% -19.1% 2738991 Â 2% sched_debug.cpu#61.ttwu_local
18.50 Â 8% -16.2% 15.50 Â 3% sched_debug.cpu#62.cpu_load[0]
18.50 Â 2% -16.2% 15.50 Â 7% sched_debug.cpu#62.cpu_load[1]
18.00 Â 0% -13.9% 15.50 Â 7% sched_debug.cpu#62.cpu_load[2]
18.50 Â 2% -16.2% 15.50 Â 9% sched_debug.cpu#62.cpu_load[3]
15.00 Â 0% -16.7% 12.50 Â 8% sched_debug.cpu#62.cpu_load[4]
3380210 Â 3% -17.0% 2806660 Â 1% sched_debug.cpu#62.nr_switches
3380290 Â 3% -17.0% 2807250 Â 1% sched_debug.cpu#62.sched_count
3366285 Â 4% -16.8% 2801102 Â 1% sched_debug.cpu#62.ttwu_count
3365076 Â 4% -16.8% 2799344 Â 1% sched_debug.cpu#62.ttwu_local
3361256 Â 3% -16.7% 2800949 Â 2% sched_debug.cpu#63.nr_switches
3361648 Â 3% -16.7% 2801160 Â 2% sched_debug.cpu#63.sched_count
1789 Â 9% -67.2% 587.25 Â 38% sched_debug.cpu#63.sched_goidle
3363922 Â 2% -17.0% 2792346 Â 2% sched_debug.cpu#63.ttwu_count
3343331 Â 3% -16.5% 2791028 Â 2% sched_debug.cpu#63.ttwu_local
3398381 Â 2% -17.1% 2815960 Â 1% sched_debug.cpu#7.nr_switches
3399690 Â 2% -17.1% 2817099 Â 1% sched_debug.cpu#7.sched_count
3392364 Â 2% -16.9% 2819988 Â 1% sched_debug.cpu#7.ttwu_count
3390381 Â 2% -17.2% 2806057 Â 2% sched_debug.cpu#7.ttwu_local
17.50 Â 2% -20.0% 14.00 Â 8% sched_debug.cpu#8.cpu_load[3]
15.00 Â 0% -18.3% 12.25 Â 12% sched_debug.cpu#8.cpu_load[4]
3395902 Â 2% -16.5% 2834133 Â 2% sched_debug.cpu#8.nr_switches
-3.50 Â-42% -92.9% -0.25 Â-519% sched_debug.cpu#8.nr_uninterruptible
3396181 Â 2% -16.5% 2834516 Â 2% sched_debug.cpu#8.sched_count
3387974 Â 2% -16.5% 2828023 Â 2% sched_debug.cpu#8.ttwu_count
3386284 Â 2% -16.7% 2822112 Â 1% sched_debug.cpu#8.ttwu_local
3403144 Â 3% -16.9% 2828970 Â 2% sched_debug.cpu#9.nr_switches
-1.50 Â-100% -300.0% 3.00 Â 47% sched_debug.cpu#9.nr_uninterruptible
3403390 Â 3% -16.9% 2829633 Â 2% sched_debug.cpu#9.sched_count
3401228 Â 3% -17.0% 2822914 Â 1% sched_debug.cpu#9.ttwu_count
3399884 Â 3% -17.1% 2819319 Â 2% sched_debug.cpu#9.ttwu_local
0.16 Â 11% -73.7% 0.04 Â173% sched_debug.rt_rq[25]:/.rt_time
1.92 Â 99% -100.0% 0.00 Â -1% sched_debug.rt_rq[60]:/.rt_time
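
For readers unfamiliar with these comparison tables: each row shows the mean for the first commit with its ±stddev%, the relative change, then the mean for the second commit with its ±stddev%. The %change column is just (new - old) / old * 100, which is also why nr_uninterruptible rows with tiny or negative means can show extreme percentages. A minimal sketch of that arithmetic (the layout reading is our interpretation of this output, not a published lkp API):

    # Recompute the %change column for one row of the table above.
    def pct_change(old: float, new: float) -> float:
        """Relative delta between the two commits' means, in percent."""
        return (new - old) / old * 100.0

    # Example row: sched_debug.cpu#27.ttwu_count
    old, new = 3319860, 2772945
    print(f"{pct_change(old, new):+.1f}%")   # -16.5%, matching the table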