Re: [RFC PATCH v4 00/19] Core scheduling v4

From: Aaron Lu
Date: Wed Mar 04 2020 - 23:33:49 EST


On Wed, Mar 04, 2020 at 07:54:39AM +0800, Li, Aubrey wrote:
> On 2020/3/3 22:59, Li, Aubrey wrote:
> > On 2020/2/29 7:55, Tim Chen wrote:
...
> >> In Vinnet's fix, we only look at the currently running task's weight in
> >> src and dst rq. Perhaps the load on the src and dst rq needs to be considered
> >> to prevent too great an imbalance between the run queues?
> >
> > We are trying to migrate a task; can we just use cfs.h_nr_running? This signal
> > is used to find the busiest run queue as well.
>
> How about this one? The cgroup weight issue seems fixed on my side.

It doesn't apply on top of your coresched_v4-v5.5.2 branch, so I
manually applied it. Not sure if I missed something.

It's now getting 4 CPUs in 2 cores. Better, but not back to normal yet...

Thanks,
Aaron

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index f42ceec..90024cf 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1767,6 +1767,8 @@ static void task_numa_compare(struct task_numa_env *env,
>         rcu_read_unlock();
> }
>
> +static inline bool sched_core_cookie_match(struct rq *rq, struct task_struct *p);
> +
> static void task_numa_find_cpu(struct task_numa_env *env,
>                                 long taskimp, long groupimp)
> {
> @@ -5650,6 +5652,44 @@ static struct sched_group *
> find_idlest_group(struct sched_domain *sd, struct task_struct *p,
>                   int this_cpu, int sd_flag);
>
> +#ifdef CONFIG_SCHED_CORE
> +static inline bool sched_core_cookie_match(struct rq *rq, struct task_struct *p)
> +{
> +        struct rq *src_rq = task_rq(p);
> +        bool idle_core = true;
> +        int cpu;
> +
> +        /* Ignore cookie match if core scheduler is not enabled on the CPU. */
> +        if (!sched_core_enabled(rq))
> +                return true;
> +
> +        if (rq->core->core_cookie == p->core_cookie)
> +                return true;
> +
> +        for_each_cpu(cpu, cpu_smt_mask(cpu_of(rq))) {
> +                if (!available_idle_cpu(cpu)) {
> +                        idle_core = false;
> +                        break;
> +                }
> +        }
> +        /*
> +         * A CPU in an idle core is always the best choice for tasks with
> +         * cookies.
> +         */
> +        if (idle_core)
> +                return true;
> +
> +        /*
> +         * Ignore cookie match if there is a big imbalance between the src rq
> +         * and dst rq.
> +         */
> +        if ((src_rq->cfs.h_nr_running - rq->cfs.h_nr_running) > 1)
> +                return true;
> +
> +        return false;
> +}
> +#endif
> +
> /*
>  * find_idlest_group_cpu - find the idlest CPU among the CPUs in the group.
>  */
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 7ae6858..8c607e9 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1061,28 +1061,6 @@ static inline raw_spinlock_t *rq_lockp(struct rq *rq)
>         return &rq->__lock;
> }
>
> -static inline bool sched_core_cookie_match(struct rq *rq, struct task_struct *p)
> -{
> -        bool idle_core = true;
> -        int cpu;
> -
> -        /* Ignore cookie match if core scheduler is not enabled on the CPU. */
> -        if (!sched_core_enabled(rq))
> -                return true;
> -
> -        for_each_cpu(cpu, cpu_smt_mask(cpu_of(rq))) {
> -                if (!available_idle_cpu(cpu)) {
> -                        idle_core = false;
> -                        break;
> -                }
> -        }
> -        /*
> -         * A CPU in an idle core is always the best choice for tasks with
> -         * cookies.
> -         */
> -        return idle_core || rq->core->core_cookie == p->core_cookie;
> -}
> -
> extern void queue_core_balance(struct rq *rq);
>
> void sched_core_add(struct rq *rq, struct task_struct *p);
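
For anyone who wants to poke at the decision logic outside the kernel, here is a
rough standalone model of sched_core_cookie_match() as it appears in the patch
above. It is only a sketch: struct model_rq, its fields, and model_cookie_match()
are invented names for illustration and merely stand in for the rq/cfs_rq state
and SMT-mask walk the kernel actually consults. It exercises the cases where a
cookie mismatch is still tolerated (core scheduling disabled, a fully idle
destination core, or a src/dst h_nr_running gap larger than one).

/*
 * Rough userspace model of the sched_core_cookie_match() logic from the
 * patch above.  struct model_rq, its fields and model_cookie_match() are
 * made-up names for illustration only; they are not kernel APIs.
 */
#include <stdbool.h>
#include <stdio.h>

struct model_rq {
        bool core_sched_enabled;        /* core scheduling enabled on this CPU */
        bool core_idle;                 /* all SMT siblings of the core are idle */
        unsigned long core_cookie;      /* cookie currently selected on the core */
        unsigned int h_nr_running;      /* stand-in for cfs.h_nr_running */
};

/*
 * Return true when migrating a task with 'task_cookie' from src to dst is
 * acceptable, mirroring the order of checks in the patch: core scheduling
 * disabled, cookie match, idle destination core, or a large imbalance.
 */
static bool model_cookie_match(const struct model_rq *src,
                               const struct model_rq *dst,
                               unsigned long task_cookie)
{
        if (!dst->core_sched_enabled)
                return true;

        if (dst->core_cookie == task_cookie)
                return true;

        if (dst->core_idle)
                return true;

        /*
         * Written as a plain comparison to avoid unsigned wrap-around; the
         * patch expresses the same idea as a subtraction compared against 1.
         */
        if (src->h_nr_running > dst->h_nr_running + 1)
                return true;

        return false;
}

int main(void)
{
        struct model_rq src = { .core_sched_enabled = true, .core_idle = false,
                                .core_cookie = 0x1, .h_nr_running = 4 };
        struct model_rq dst = { .core_sched_enabled = true, .core_idle = false,
                                .core_cookie = 0x2, .h_nr_running = 1 };

        /* Cookie mismatch, busy core, but src is three tasks deeper: allow. */
        printf("imbalanced: %d\n", model_cookie_match(&src, &dst, 0x1));

        /* With a gap of only one task the mismatch is rejected. */
        dst.h_nr_running = 3;
        printf("balanced:   %d\n", model_cookie_match(&src, &dst, 0x1));

        return 0;
}

Built with a stock gcc, the first call prints 1 (the three-task gap justifies
ignoring the cookie) and the second prints 0, which matches the intent of the
new imbalance check.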