Re: [PATCH v2 2/5] sched/numa: Replace runnable_load_avg by load_avg

From: Dietmar Eggemann
Date: Tue Feb 18 2020 - 07:38:01 EST


On 14/02/2020 16:27, Vincent Guittot wrote:

[...]

> /*
> * The load is corrected for the CPU capacity available on each node.
> *
> @@ -1788,10 +1831,10 @@ static int task_numa_migrate(struct task_struct *p)
> dist = env.dist = node_distance(env.src_nid, env.dst_nid);
> taskweight = task_weight(p, env.src_nid, dist);
> groupweight = group_weight(p, env.src_nid, dist);
> - update_numa_stats(&env.src_stats, env.src_nid);
> + update_numa_stats(&env, &env.src_stats, env.src_nid);

Passing both &env and a member of env (&env.src_stats) looks strange. Can you do:

-static void update_numa_stats(struct task_numa_env *env,
+static void update_numa_stats(unsigned int imbalance_pct,
struct numa_stats *ns, int nid)

- update_numa_stats(&env, &env.src_stats, env.src_nid);
+ update_numa_stats(env.imbalance_pct, &env.src_stats, env.src_nid);
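
i.e. something like this (untested sketch; assuming env is only needed
in there for env->imbalance_pct, e.g. to feed numa_classify(), and that
the stats members are the ones this series adds):

static void update_numa_stats(unsigned int imbalance_pct,
			      struct numa_stats *ns, int nid)
{
	int cpu;

	memset(ns, 0, sizeof(*ns));
	for_each_cpu(cpu, cpumask_of_node(nid)) {
		struct rq *rq = cpu_rq(cpu);

		ns->load += cpu_load(rq);
		ns->util += cpu_util(cpu);
		ns->nr_running += rq->cfs.h_nr_running;
		ns->compute_capacity += capacity_of(cpu);
	}

	ns->weight = cpumask_weight(cpumask_of_node(nid));

	/* If imbalance_pct was the only thing taken from *env: */
	ns->node_type = numa_classify(imbalance_pct, ns);
}

That keeps the helper independent of struct task_numa_env.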

[...]

> +static unsigned long cpu_runnable_load(struct rq *rq)
> +{
> + return cfs_rq_runnable_load_avg(&rq->cfs);
> +}
> +

Why not remove cpu_runnable_load() in this patch rather than moving it?
It is now unused:

kernel/sched/fair.c:5492:22: warning: 'cpu_runnable_load' defined but
not used [-Wunused-function]
static unsigned long cpu_runnable_load(struct rq *rq)
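
I.e. since it is only a wrapper around cfs_rq_runnable_load_avg(), and
the -Wunused-function warning suggests it has no callers left after this
patch, it could just go (untested):

-static unsigned long cpu_runnable_load(struct rq *rq)
-{
-	return cfs_rq_runnable_load_avg(&rq->cfs);
-}
-

Anything that still needs it later could call
cfs_rq_runnable_load_avg(&rq->cfs) directly.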