Re: [PATCH v3 04/10] sched/fair: Let low-priority cores help high-priority busy SMT cores

From: Ricardo Neri
Date: Mon Feb 13 2023 - 18:13:41 EST


On Mon, Feb 13, 2023 at 02:40:24PM +0100, Dietmar Eggemann wrote:
> On 07/02/2023 05:58, Ricardo Neri wrote:
>
> [...]
>
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 80c86462c6f6..c9d0ddfd11f2 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -10436,11 +10436,20 @@ static struct rq *find_busiest_queue(struct lb_env *env,
> > nr_running == 1)
> > continue;
> >
> > - /* Make sure we only pull tasks from a CPU of lower priority */
> > + /*
> > + * Make sure we only pull tasks from a CPU of lower priority
> > + * when balancing between SMT siblings.
> > + *
> > + * If balancing between cores, let lower priority CPUs help
> > + * SMT cores with more than one busy sibling.
> > + */
> > if ((env->sd->flags & SD_ASYM_PACKING) &&
> > sched_asym_prefer(i, env->dst_cpu) &&
> > - nr_running == 1)
> > - continue;
> > + nr_running == 1) {
> > + if (env->sd->flags & SD_SHARE_CPUCAPACITY ||
> > + (!(env->sd->flags & SD_SHARE_CPUCAPACITY) && is_core_idle(i)))
> > + continue;
>
> is_core_idle(i) returns true for !CONFIG_SCHED_SMT. So far it was always
> guarded by `flags & SD_SHARE_CPUCAPACITY` which is only set for
> CONFIG_SCHED_SMT.
>
> Here it's different but still depends on `flags & SD_ASYM_PACKING`.
>
> Can we have SD_ASYM_PACKING w/o CONFIG_SCHED_SMT? The comment just says
> `If balancing between cores (MC), let lower priority CPUs help SMT cores
> with more than one busy sibling.`

We cannot have SD_ASYM_PACKING w/o CONFIG_SMP. We may have it without
CONFIG_SCHED_SMT. In the latter case we want is_core_idle() to return true
as there are no SMT siblings competing for core throughput and CPU priority
is meaningful. I can add an extra comment clarifying the !CONFIG_SCHED_SMT
case.
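For reference, is_core_idle() in kernel/sched/fair.c is structured roughly
like this (paraphrased, not copied from the tree); the #ifdef is what makes
it trivially return true for !CONFIG_SCHED_SMT:

```c
static inline bool is_core_idle(int cpu)
{
#ifdef CONFIG_SCHED_SMT
	int sibling;

	for_each_cpu(sibling, cpu_smt_mask(cpu)) {
		if (cpu == sibling)
			continue;

		if (!idle_cpu(sibling))
			return false;
	}
#endif

	return true;
}
```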

>
> So this only mentions your specific asymmetric e-cores w/o SMT and
> p-cores w/ SMT case.
>
> I'm asking since numa_idle_core(), the only user of is_core_idle() so
> far, has an extra `!static_branch_likely(&sched_smt_present)` condition
> before calling it.

That is a good point. Calling is_core_idle() is pointless if
!static_branch_likely(&sched_smt_present).

As per feedback from Vincent and Peter, I have put this logic in a helper
function. I'll add an extra check for this static key.
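FWIW, here is a self-contained sketch of the decision logic that helper
would implement, with the static-key check folded in. The function name, the
toy topology, and the stubbed sched_smt_present / is_core_idle() /
sched_asym_prefer() are all hypothetical stand-ins for illustration, not the
actual kernel code:

```c
#include <stdbool.h>

/* Hypothetical stand-in for the SD_SHARE_CPUCAPACITY flag bit. */
#define SD_SHARE_CPUCAPACITY 0x2

/*
 * Toy topology: CPUs 0 and 1 are SMT siblings on a high-priority core,
 * CPU 2 is a lower-priority core without SMT. All values are made up.
 */
static const int cpu_prio[3] = { 2, 2, 1 };

/* Whether all SMT siblings of this CPU's core, other than itself, are idle. */
static const bool core_is_idle[3] = { false, true, true };

/* Stub for the scheduler's sched_smt_present static key. */
static bool sched_smt_present = true;

static bool sched_asym_prefer(int a, int b)
{
	return cpu_prio[a] > cpu_prio[b];
}

static bool is_core_idle(int cpu)
{
	return core_is_idle[cpu];
}

/*
 * Sketch: may dst_cpu pull from src_cpu under SD_ASYM_PACKING?
 * Between SMT siblings (SD_SHARE_CPUCAPACITY) strictly obey priorities.
 * Between cores, a lower-priority CPU may also help an SMT core with
 * more than one busy sibling. Without SMT anywhere in the system,
 * is_core_idle() would be trivially true, so skip the call.
 */
static bool sched_asym_sketch(int sd_flags, int dst_cpu, int src_cpu)
{
	/* Always obey priorities between SMT siblings. */
	if (sd_flags & SD_SHARE_CPUCAPACITY)
		return sched_asym_prefer(dst_cpu, src_cpu);

	/* No SMT siblings to compete with: CPU priority alone decides. */
	if (!sched_smt_present)
		return sched_asym_prefer(dst_cpu, src_cpu);

	return sched_asym_prefer(dst_cpu, src_cpu) || !is_core_idle(src_cpu);
}
```

With this shape, both call sites (find_busiest_queue() and
asym_active_balance()) reduce to one call, and the sched_smt_present check
lives in a single place.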

>
> > + }
> >
> > switch (env->migration_type) {
> > case migrate_load:
> > @@ -10530,8 +10539,20 @@ asym_active_balance(struct lb_env *env)
> > * lower priority CPUs in order to pack all tasks in the
> > * highest priority CPUs.
> > */
> > - return env->idle != CPU_NOT_IDLE && (env->sd->flags & SD_ASYM_PACKING) &&
> > - sched_asym_prefer(env->dst_cpu, env->src_cpu);
> > + if (env->idle != CPU_NOT_IDLE && (env->sd->flags & SD_ASYM_PACKING)) {
> > + /* Always obey priorities between SMT siblings. */
> > + if (env->sd->flags & SD_SHARE_CPUCAPACITY)
> > + return sched_asym_prefer(env->dst_cpu, env->src_cpu);
> > +
> > + /*
> > + * A lower priority CPU can help an SMT core with more than one
> > + * busy sibling.
> > + */
> > + return sched_asym_prefer(env->dst_cpu, env->src_cpu) ||
> > + !is_core_idle(env->src_cpu);
>
> Here it is similar.

I will use my helper function here as well.

Thanks and BR,
Ricardo