[PATCH v2 10/10] sched/fair: delete superfluous set_task_rq_fair()

From: Chengming Zhou
Date: Wed Jul 13 2022 - 00:06:03 EST


set_task_rq() is called when a task moves across CPUs or cgroups to
change its cfs_rq and parent entity, and it calls set_task_rq_fair()
to sync the blocked task load_avg just before changing its cfs_rq.

1. task migrates across CPUs: it is detached/removed from the prev
cfs_rq and its sched_avg last_update_time is reset to 0, so there is
no need to sync again.

2. task migrates across cgroups: it is detached from the prev cfs_rq
and its sched_avg last_update_time is reset to 0, so no sync is
needed either.

3. !fair task migrates across CPUs/cgroups: load tracking is stopped
for !fair tasks, and sched_avg last_update_time is reset to 0 in
switched_from_fair(), so no sync is needed here either (the three
paths are sketched below).
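
For reference, an abbreviated sketch of the three paths as they look
with the earlier patches of this series applied (call chains
hand-simplified here, not verbatim kernel code):

	/* 1. cross-CPU migration */
	migrate_task_rq_fair()
	    remove_entity_load_avg(&p->se);
	    p->se.avg.last_update_time = 0;	/* nothing left to sync */

	/* 2. cgroup migration */
	task_change_group_fair()
	    detach_task_cfs_rq(p);	/* detaches and resets to 0 */
	    set_task_rq(p, task_cpu(p));
	    attach_task_cfs_rq(p);

	/* 3. task leaving the fair class */
	switched_from_fair()
	    detach_task_cfs_rq(p);	/* load tracking stops here */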

So set_task_rq_fair() is no longer needed; delete it, and delete the
now-unused ATTACH_AGE_LOAD feature along with it.

Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
---
 kernel/sched/fair.c     | 31 -------------------------------
 kernel/sched/features.h |  1 -
 kernel/sched/sched.h    |  8 --------
 3 files changed, 40 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 576028f5a09e..b435eda88468 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3430,37 +3430,6 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
	}
}

-/*
- * Called within set_task_rq() right before setting a task's CPU. The
- * caller only guarantees p->pi_lock is held; no other assumptions,
- * including the state of rq->lock, should be made.
- */
-void set_task_rq_fair(struct sched_entity *se,
-		      struct cfs_rq *prev, struct cfs_rq *next)
-{
-	u64 p_last_update_time;
-	u64 n_last_update_time;
-
-	if (!sched_feat(ATTACH_AGE_LOAD))
-		return;
-
-	/*
-	 * We are supposed to update the task to "current" time, then its up to
-	 * date and ready to go to new CPU/cfs_rq. But we have difficulty in
-	 * getting what current time is, so simply throw away the out-of-date
-	 * time. This will result in the wakee task is less decayed, but giving
-	 * the wakee more load sounds not bad.
-	 */
-	if (!(se->avg.last_update_time && prev))
-		return;
-
-	p_last_update_time = cfs_rq_last_update_time(prev);
-	n_last_update_time = cfs_rq_last_update_time(next);
-
-	__update_load_avg_blocked_se(p_last_update_time, se);
-	se->avg.last_update_time = n_last_update_time;
-}
-
/*
 * When on migration a sched_entity joins/leaves the PELT hierarchy, we need to
 * propagate its contribution. The key to this propagation is the invariant
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index ee7f23c76bd3..fb92431d496f 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -85,7 +85,6 @@ SCHED_FEAT(RT_PUSH_IPI, true)

SCHED_FEAT(RT_RUNTIME_SHARE, false)
SCHED_FEAT(LB_MIN, false)
-SCHED_FEAT(ATTACH_AGE_LOAD, true)

SCHED_FEAT(WA_IDLE, true)
SCHED_FEAT(WA_WEIGHT, true)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 19e0076e4245..a8ec7af4bd51 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -505,13 +505,6 @@ extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);

extern int sched_group_set_idle(struct task_group *tg, long idle);

-#ifdef CONFIG_SMP
-extern void set_task_rq_fair(struct sched_entity *se,
-			     struct cfs_rq *prev, struct cfs_rq *next);
-#else /* !CONFIG_SMP */
-static inline void set_task_rq_fair(struct sched_entity *se,
-				    struct cfs_rq *prev, struct cfs_rq *next) { }
-#endif /* CONFIG_SMP */
#endif /* CONFIG_FAIR_GROUP_SCHED */

#else /* CONFIG_CGROUP_SCHED */
@@ -1937,7 +1930,6 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
#endif

#ifdef CONFIG_FAIR_GROUP_SCHED
-	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
	p->se.cfs_rq = tg->cfs_rq[cpu];
	p->se.parent = tg->se[cpu];
	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
--
2.36.1