[PATCH v4 0/9] sched/fair: task load tracking optimization and cleanup

From: Chengming Zhou
Date: Mon Aug 08 2022 - 08:58:08 EST


Hi all,

This patch series contains optimizations and cleanups for task load tracking
when a task migrates between CPUs or cgroups, or is switched_from/to_fair().
It is based on tip/sched/core.

There are three cases of detach/attach_entity_load_avg (besides fork and exit)
for a fair task (a toy model follows the list):
1. the task migrates to another CPU (on_rq migrate or wakeup migrate)
2. the task migrates to another cgroup (detach and attach)
3. the task is switched_from/to_fair (detach, later attach)
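
To make the pairing concrete, here is a minimal userspace toy model of the
attach/detach bookkeeping. The names mirror kernel/sched/fair.c, but the
types and arithmetic are simplified assumptions for illustration, not the
kernel code:

#include <assert.h>
#include <stdio.h>

struct sched_avg {
	unsigned long long	last_update_time;
	long			load_avg;
};

struct sched_entity {
	struct sched_avg	avg;
};

struct cfs_rq {
	long			load_avg;
	unsigned long long	clock;
};

/* attach: fold the entity's load into the runqueue sums and stamp
 * last_update_time so later PELT updates know where to decay from */
static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	se->avg.last_update_time = cfs_rq->clock;
	cfs_rq->load_avg += se->avg.load_avg;
}

/* detach: remove the contribution again; last_update_time == 0 is the
 * "unattached" marker that patches 6 and 7 below lean on */
static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	cfs_rq->load_avg -= se->avg.load_avg;
	se->avg.last_update_time = 0;
}

int main(void)
{
	struct cfs_rq src = { .load_avg = 0, .clock = 100 };
	struct cfs_rq dst = { .load_avg = 0, .clock = 200 };
	struct sched_entity se = { .avg = { .load_avg = 42 } };

	attach_entity_load_avg(&src, &se);	/* e.g. wake_up_new_task() */
	detach_entity_load_avg(&src, &se);	/* case 1: leave the old CPU */
	attach_entity_load_avg(&dst, &se);	/* ... arrive on the new CPU */

	assert(src.load_avg == 0 && dst.load_avg == 42);
	printf("dst.load_avg = %ld\n", dst.load_avg);
	return 0;
}

The invariant the real code keeps is the same: every attach is balanced by
exactly one detach, and last_update_time == 0 marks an se whose load is
currently accounted nowhere.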

Patches 1-3 clean up the cgroup change case by removing cpu_cgrp_subsys->fork(),
since we already do the same thing in sched_cgroup_fork().
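
As background for patch 1, the se depth maintenance moves into set_task_rq()
itself. A trimmed sketch (paraphrased from kernel/sched/sched.h; unrelated
lines elided):

static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
{
#ifdef CONFIG_FAIR_GROUP_SCHED
	struct task_group *tg = task_group(p);

	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
	p->se.cfs_rq = tg->cfs_rq[cpu];
	p->se.parent = tg->se[cpu];
	/* new: keep depth in sync here, not in attach_entity_cfs_rq() */
	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
#endif
	/* ... rt group scheduling part elided ... */
}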

Patch 4/9 only updates comments in enqueue/dequeue_entity(). Patch 5/9
optimizes the CPU migration case by combining the detach into dequeue.
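
The shape of that change, roughly (paraphrased, not the literal diff;
DO_DETACH is the new flag this series adds next to the existing
UPDATE_TG/SKIP_AGE_LOAD/DO_ATTACH flags of update_load_avg()):

static void
dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
{
	int action = UPDATE_TG;

	/*
	 * If the entity is a task being migrated off this CPU, fold
	 * the load_avg detach into this (already rq-locked) dequeue
	 * instead of doing it separately in migrate_task_rq_fair().
	 */
	if (entity_is_task(se) && task_on_rq_migrating(task_of(se)))
		action |= DO_DETACH;

	update_load_avg(cfs_rq, se, action);

	/* ... rest of dequeue_entity() unchanged ... */
}

update_load_avg() then performs detach_entity_load_avg() plus
update_tg_load_avg() when it sees DO_DETACH, so the on_rq migrate case no
longer needs a separate detach afterwards.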

Patch 6/9 fixes another detach-on-unattached-task case: the task has been
woken by try_to_wake_up() but is still waiting to actually be woken up by
sched_ttwu_pending().
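
The fix, roughly (paraphrased): an unattached se is recognizable by
last_update_time == 0, so the detach path can simply bail out instead of
detaching load that was never attached:

static void detach_entity_cfs_rq(struct sched_entity *se)
{
	struct cfs_rq *cfs_rq = cfs_rq_of(se);

	/*
	 * The task's sched_avg may not be attached yet:
	 * - a forked task not yet woken by wake_up_new_task(), or
	 * - a task woken by try_to_wake_up() that is still waiting
	 *   on the wakelist for sched_ttwu_pending().
	 */
	if (!se->avg.last_update_time)
		return;

	detach_entity_load_avg(cfs_rq, se);
	/* ... remaining detach work elided ... */
}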

Patch 7/9 removes the unnecessary limitation that changing the cgroup of a
forked task fails as long as the task hasn't been woken up by
wake_up_new_task().
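
With that, the new-task check can live in task_change_group_fair() itself
instead of failing the cgroup attach. Roughly (paraphrased):

static void task_change_group_fair(struct task_struct *p)
{
	/*
	 * We can't detach or attach a forked task which hasn't
	 * been woken up by wake_up_new_task() yet.
	 */
	if (READ_ONCE(p->__state) == TASK_NEW)
		return;

	detach_task_cfs_rq(p);
	/* tell the new cfs_rq that we migrated (patch 3) */
	p->se.avg.last_update_time = 0;
	set_task_rq(p, task_cpu(p));
	attach_task_cfs_rq(p);
}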

Patches 8-9 optimize post_init_entity_util_avg() for fair tasks and skip
setting util_avg and runnable_avg for !fair tasks at fork time.
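
After patches 8-9 the fork-time path looks roughly like this (paraphrased;
the !fair early return is the new part, and the actual attach is deferred
to the DO_ATTACH path in update_load_avg() at the first enqueue):

void post_init_entity_util_avg(struct task_struct *p)
{
	struct sched_entity *se = &p->se;
	struct cfs_rq *cfs_rq = cfs_rq_of(se);
	struct sched_avg *sa = &se->avg;
	long cpu_scale = arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq)));
	long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;

	if (p->sched_class != &fair_sched_class) {
		/*
		 * !fair tasks never get attached, so don't bother
		 * computing a fork-time util_avg/runnable_avg; just
		 * stamp last_update_time for a later switched_to_fair().
		 */
		se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
		return;
	}

	if (cap > 0) {
		if (cfs_rq->avg.util_avg != 0) {
			sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
			sa->util_avg /= (cfs_rq->avg.load_avg + 1);
			if (sa->util_avg > cap)
				sa->util_avg = cap;
		} else {
			sa->util_avg = cap;
		}
	}

	sa->runnable_avg = sa->util_avg;
}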

Thanks!

Changes in v4:
- Drop the detach/attach_entity_cfs_rq() refactor patch from the last version.
- Move the new forked task check into task_change_group_fair().

Changes in v3:
- One big change: this series no longer freezes PELT sum/avg values for
  use as initial values when re-entering fair, since those PELT values
  become much less relevant.
- Reorder patches and collect tags from Vincent and Dietmar. Thanks!
- Fix a detach on an unattached task which has been woken up by
  try_to_wake_up() but is waiting to actually be woken up by
  sched_ttwu_pending().
- Delete TASK_NEW, which limited forked tasks from changing cgroup.
- Don't init util_avg and runnable_avg for !fair tasks at fork time.

Changes in v2:
- Split task se depth maintenance into a separate patch 3, as suggested
  by Peter.
- Reorder patches 6-7 before patches 8-9, since we need update_load_avg()
  to do conditional attach/detach to avoid corner cases like the
  double-attach problem.

Chengming Zhou (9):
sched/fair: maintain task se depth in set_task_rq()
sched/fair: remove redundant cpu_cgrp_subsys->fork()
sched/fair: reset sched_avg last_update_time before set_task_rq()
sched/fair: update comments in enqueue/dequeue_entity()
sched/fair: combine detach into dequeue when migrating task
sched/fair: fix another detach on unattached task corner case
sched/fair: allow changing cgroup of new forked task
sched/fair: defer task sched_avg attach to enqueue_entity()
sched/fair: don't init util/runnable_avg for !fair task

include/linux/sched.h | 5 +-
kernel/sched/core.c | 57 ++--------
kernel/sched/fair.c | 234 ++++++++++++++++++++----------------------
kernel/sched/sched.h | 6 +-
4 files changed, 124 insertions(+), 178 deletions(-)

--
2.36.1