Re: [PATCH] sched/fair: Optimize select_idle_cpu

From: chengjian (D)
Date: Fri Dec 13 2019 - 04:57:27 EST



On 2019/12/12 23:24, Peter Zijlstra wrote:
On Thu, Dec 12, 2019 at 10:41:02PM +0800, Cheng Jian wrote:

Fixes: 1ad3aaf3fcd2 ("sched/core: Implement new approach to scale select_idle_cpu()")
The 'funny' thing is that select_idle_core() actually does the right
thing.

Copying that should work:


diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 08a233e97a01..416d574dcebf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5828,6 +5837,7 @@ static inline int select_idle_smt(struct task_struct *p, int target)
*/
static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
{
+ struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
struct sched_domain *this_sd;
u64 avg_cost, avg_idle;
u64 time, cost;
@@ -5859,11 +5869,11 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
time = cpu_clock(this);
- for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
+ cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+
+ for_each_cpu_wrap(cpu, cpus, target) {
if (!--nr)
return si_cpu;
- if (!cpumask_test_cpu(cpu, p->cpus_ptr))
- continue;
if (available_idle_cpu(cpu))
break;
if (si_cpu == -1 && sched_idle_cpu(cpu))



In select_idle_smt():

/*
 * Scan the local SMT mask for idle CPUs.
 */
static int select_idle_smt(struct task_struct *p, int target)
{
	int cpu, si_cpu = -1;

	if (!static_branch_likely(&sched_smt_present))
		return -1;

	for_each_cpu(cpu, cpu_smt_mask(target)) {
		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
			continue;
		if (available_idle_cpu(cpu))
			return cpu;
		if (si_cpu == -1 && sched_idle_cpu(cpu))
			si_cpu = cpu;
	}

	return si_cpu;
}


Why don't we do the same thing in this function? Although cpu_smt_mask(target) usually covers only a few CPUs, it still seems better to apply the 'p->cpus_ptr' filter first with cpumask_and().