Re: [PATCH v3 04/10] sched/fair: Let low-priority cores help high-priority busy SMT cores

From: Ricardo Neri
Date: Thu Feb 09 2023 - 20:42:41 EST


On Wed, Feb 08, 2023 at 08:56:32AM +0100, Vincent Guittot wrote:
> On Tue, 7 Feb 2023 at 05:50, Ricardo Neri
> <ricardo.neri-calderon@xxxxxxxxxxxxxxx> wrote:
> >
> > Using asym_packing priorities within an SMT core is straightforward. Just
> > follow the priorities that hardware indicates.
> >
> > When balancing load from an SMT core, also consider the idle state of
> > its siblings. Priorities do not reflect that an SMT core divides its
> > throughput among all its busy siblings. They only make sense when
> > exactly one sibling is busy.
> >
> > Indicate that active balance is needed if the destination CPU has lower
> > priority than the source CPU but the latter has busy SMT siblings.
> >
> > Make find_busiest_queue() not skip higher-priority SMT cores with more
> > than one busy sibling.
> >
> > Cc: Ben Segall <bsegall@xxxxxxxxxx>
> > Cc: Daniel Bristot de Oliveira <bristot@xxxxxxxxxx>
> > Cc: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
> > Cc: Len Brown <len.brown@xxxxxxxxx>
> > Cc: Mel Gorman <mgorman@xxxxxxx>
> > Cc: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
> > Cc: Srinivas Pandruvada <srinivas.pandruvada@xxxxxxxxxxxxxxx>
> > Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
> > Cc: Tim C. Chen <tim.c.chen@xxxxxxxxx>
> > Cc: Valentin Schneider <vschneid@xxxxxxxxxx>
> > Cc: x86@xxxxxxxxxx
> > Cc: linux-kernel@xxxxxxxxxxxxxxx
> > Suggested-by: Valentin Schneider <vschneid@xxxxxxxxxx>
> > Signed-off-by: Ricardo Neri <ricardo.neri-calderon@xxxxxxxxxxxxxxx>
> > ---
> > Changes since v2:
> > * Introduced this patch.
> >
> > Changes since v1:
> > * N/A
> > ---
> > kernel/sched/fair.c | 31 ++++++++++++++++++++++++++-----
> > 1 file changed, 26 insertions(+), 5 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 80c86462c6f6..c9d0ddfd11f2 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -10436,11 +10436,20 @@ static struct rq *find_busiest_queue(struct lb_env *env,
> > nr_running == 1)
> > continue;
> >
> > - /* Make sure we only pull tasks from a CPU of lower priority */
> > + /*
> > + * Make sure we only pull tasks from a CPU of lower priority
> > + * when balancing between SMT siblings.
> > + *
> > + * If balancing between cores, let lower priority CPUs help
> > + * SMT cores with more than one busy sibling.
> > + */
> > if ((env->sd->flags & SD_ASYM_PACKING) &&
> > sched_asym_prefer(i, env->dst_cpu) &&
> > - nr_running == 1)
> > - continue;
> > + nr_running == 1) {
> > + if (env->sd->flags & SD_SHARE_CPUCAPACITY ||
> > + (!(env->sd->flags & SD_SHARE_CPUCAPACITY) && is_core_idle(i)))
>
> This 2nd if could be merged with the upper one
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10518,11 +10518,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
> */
> if ((env->sd->flags & SD_ASYM_PACKING) &&
> sched_asym_prefer(i, env->dst_cpu) &&
> - nr_running == 1) {
> - if (env->sd->flags & SD_SHARE_CPUCAPACITY ||
> - (!(env->sd->flags & SD_SHARE_CPUCAPACITY) && is_core_idle(i)))
> + (nr_running == 1) &&
> + (env->sd->flags & SD_SHARE_CPUCAPACITY ||
> + (!(env->sd->flags & SD_SHARE_CPUCAPACITY) && is_core_idle(i))))
> continue;
> - }
>
> switch (env->migration_type) {
> case migrate_load:
> ---
>
> AFAICT, you can even remove one env->sd->flags & SD_SHARE_CPUCAPACITY
> test with the below, but this makes the condition far less obvious
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a6021af9de11..7dfa30c45327 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10518,11 +10518,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
> */
> if ((env->sd->flags & SD_ASYM_PACKING) &&
> sched_asym_prefer(i, env->dst_cpu) &&
> - nr_running == 1) {
> - if (env->sd->flags & SD_SHARE_CPUCAPACITY ||
> - (!(env->sd->flags & SD_SHARE_CPUCAPACITY) && is_core_idle(i)))
> + (nr_running == 1) &&
> + !(!(env->sd->flags & SD_SHARE_CPUCAPACITY) &&
> + !is_core_idle(i)))
> continue;

I agree. This expression is equivalent to what I proposed. It is less
obvious, but the comment above clarifies what is going on. I will take
your suggestion.
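
For completeness, the equivalence can be checked by hand. Writing smt
for "env->sd->flags & SD_SHARE_CPUCAPACITY is set" and idle for
"is_core_idle(i)", my form reduces by absorption to smt || idle, and
yours reduces to the same thing by De Morgan: !(!smt && !idle) ==
smt || idle. A quick standalone userspace sketch (not kernel code; smt
and idle are just stand-ins for the two tests) that exercises all four
combinations:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	for (int smt = 0; smt <= 1; smt++) {
		for (int idle = 0; idle <= 1; idle++) {
			/* Condition as written in this patch. */
			bool v3 = smt || (!smt && idle);
			/* Vincent's simplified form. */
			bool simplified = !(!smt && !idle);

			assert(v3 == simplified);
		}
	}
	printf("both forms agree for all inputs\n");
	return 0;
}

Both forms also short-circuit past is_core_idle(i) when
SD_SHARE_CPUCAPACITY is set, so the evaluation behavior is unchanged.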

Thanks and BR,
Ricardo