Re: [PATCH 1/2] sched/fair: Fix cfs_rq_clock_pelt() for throttled cfs_rq

From: Vincent Guittot
Date: Fri Apr 08 2022 - 03:18:08 EST


On Thu, 7 Apr 2022 at 04:17, Chengming Zhou <zhouchengming@xxxxxxxxxxxxx> wrote:
>
> Since commit 23127296889f ("sched/fair: Update scale invariance of PELT")
> changed PELT to use rq_clock_pelt() instead of rq_clock_task(), we should
> also use rq_clock_pelt() for the throttled_clock_task_time and
> throttled_clock_task accounting.
>
> Fixes: 23127296889f ("sched/fair: Update scale invariance of PELT")
> Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>

Reviewed-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>

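For anyone skimming the archive: the helper named in the subject,
cfs_rq_clock_pelt() in kernel/sched/pelt.h, subtracts the accumulated
throttled time from rq_clock_pelt(). A rough sketch of that existing helper
(field names as they are before the rename in patch 2/2):

/* sketch of the existing helper in kernel/sched/pelt.h */
static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
{
	/* while throttled, the PELT clock is frozen at the throttle snapshot */
	if (unlikely(cfs_rq->throttle_count))
		return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time;

	/* otherwise, report the PELT clock minus time spent throttled */
	return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
}

Snapshotting the throttled clock with rq_clock_task() while subtracting it
from rq_clock_pelt() mixes two different clocks and skews the PELT clock of
a cfs_rq that has been throttled, which is what the hunks below correct.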
> ---
> kernel/sched/fair.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index d4bd299d67ab..e6fa5d1141b4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4846,7 +4846,7 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
>
>  	cfs_rq->throttle_count--;
>  	if (!cfs_rq->throttle_count) {
> -		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
> +		cfs_rq->throttled_clock_task_time += rq_clock_pelt(rq) -
>  					     cfs_rq->throttled_clock_task;
>
>  		/* Add cfs_rq with load or one or more already running entities to the list */
> @@ -4864,7 +4864,7 @@ static int tg_throttle_down(struct task_group *tg, void *data)
>
>  	/* group is entering throttled state, stop time */
>  	if (!cfs_rq->throttle_count) {
> -		cfs_rq->throttled_clock_task = rq_clock_task(rq);
> +		cfs_rq->throttled_clock_task = rq_clock_pelt(rq);
>  		list_del_leaf_cfs_rq(cfs_rq);
>  	}
>  	cfs_rq->throttle_count++;
> @@ -5308,7 +5308,7 @@ static void sync_throttle(struct task_group *tg, int cpu)
>  	pcfs_rq = tg->parent->cfs_rq[cpu];
>
>  	cfs_rq->throttle_count = pcfs_rq->throttle_count;
> -	cfs_rq->throttled_clock_task = rq_clock_task(cpu_rq(cpu));
> +	cfs_rq->throttled_clock_task = rq_clock_pelt(cpu_rq(cpu));
>  }
>
>  /* conditionally throttle active cfs_rq's from put_prev_entity() */
> --
> 2.35.1
>