Re: [PATCH v3 07/14] sched/core: uclamp: enforce last task UCLAMP_MAX

From: Dietmar Eggemann
Date: Thu Aug 16 2018 - 11:43:35 EST


On 08/06/2018 06:39 PM, Patrick Bellasi wrote:
When a util_max clamped task sleeps, its clamp constraints are removed
from the CPU. However, the blocked utilization on that CPU can still be
higher than the max clamp value enforced while that task was running.
Removing the max clamp when a CPU is going to be idle could thus allow
unwanted CPU frequency increases while the task is not running.

So 'rq->uclamp.flags == UCLAMP_FLAG_IDLE' means the CPU is idle, because
non-clamped tasks are tracked as well (in group_id = 0).

Maybe this is worth mentioning here?
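
To spell out what I mean, here is a minimal, purely illustrative sketch of
the refcounting I have in mind (the struct and helper names are mine, not
the series' actual code): because non-clamped tasks are also refcounted,
in group_id 0, having no active group at all implies the CPU has no
RUNNABLE tasks left.

/* Illustrative sketch only; not the series' actual data structures. */
struct sketch_uclamp_group {
	unsigned int value;	/* clamp value represented by this group */
	unsigned int tasks;	/* RUNNABLE tasks currently refcounted here */
};

/* A group is "active" while at least one RUNNABLE task is refcounted in it. */
static inline int sketch_group_active(const struct sketch_uclamp_group *grp)
{
	return grp->tasks > 0;
}

/*
 * Non-clamped tasks are accounted in group_id 0, so if no group at all
 * is active, the CPU has no RUNNABLE tasks, i.e. it is going idle,
 * which is the situation in which UCLAMP_FLAG_IDLE would be set.
 */
static inline int sketch_cpu_going_idle(const struct sketch_uclamp_group *grp,
					unsigned int nr_groups)
{
	unsigned int group_id;

	for (group_id = 0; group_id < nr_groups; group_id++) {
		if (sketch_group_active(&grp[group_id]))
			return 0;
	}
	return 1;
}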

This can happen, for example, when there is another (smaller) task
running on a different CPU in the same frequency domain.
In this case, when we aggregate the utilization of all the CPUs in a
shared frequency domain, schedutil can still see the full non-clamped
blocked utilization of all the CPUs and thus eventually increase the
frequency.
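
As a rough sketch of that aggregation effect (illustrative only, not the
actual schedutil code), the shared frequency domain essentially follows the
highest per-CPU request, so the un-clamped blocked utilization of the
now-sleeping task's CPU can keep driving the domain's frequency:

/* Illustrative only: the domain follows the highest per-CPU utilization. */
static unsigned long sketch_domain_util(const unsigned long *cpu_util,
					unsigned int nr_cpus)
{
	unsigned long max_util = 0;
	unsigned int cpu;

	for (cpu = 0; cpu < nr_cpus; cpu++) {
		if (cpu_util[cpu] > max_util)
			max_util = cpu_util[cpu];
	}

	/* This value would drive the next frequency for the whole domain. */
	return max_util;
}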

Let's fix this by using:

uclamp_cpu_put_id(UCLAMP_MAX)
uclamp_cpu_update(last_clamp_value)

to detect when a CPU has no more RUNNABLE clamped tasks and to flag this
condition. Thus, while a CPU is idle, we can still enforce the last used
clamp value for it.

By contrast, we do not track any UCLAMP_MIN since, while a CPU is
idle, we don't want to enforce any minimum frequency.
Indeed, we rely just on blocked load decay to smoothly reduce the
frequency.
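
If I picture this asymmetry as a tiny sketch (with locally defined names,
not the series' code), it would be something like: at idle, only the last
UCLAMP_MAX keeps being enforced, and no minimum is, so blocked load decay
alone brings the frequency back down.

/* Illustrative clamp indexes, mirroring UCLAMP_MIN/UCLAMP_MAX. */
enum sketch_clamp_id { SKETCH_UCLAMP_MIN, SKETCH_UCLAMP_MAX };

/* What, if anything, stays enforced for an idle CPU in this sketch. */
static inline unsigned int sketch_idle_clamp(enum sketch_clamp_id clamp_id,
					     unsigned int last_clamp_value)
{
	if (clamp_id == SKETCH_UCLAMP_MAX)
		return last_clamp_value;	/* keep capping the idle CPU */

	return 0;	/* no minimum enforced; let blocked load decay rule */
}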

[...]

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bc2beedec7bf..ff76b000bbe8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -906,7 +906,8 @@ uclamp_group_find(int clamp_id, unsigned int clamp_value)
 * For the specified clamp index, this method computes the new CPU utilization
 * clamp to use until the next change on the set of RUNNABLE tasks on that CPU.
 */
-static inline void uclamp_cpu_update(struct rq *rq, int clamp_id)
+static inline void uclamp_cpu_update(struct rq *rq, int clamp_id,
+				     unsigned int last_clamp_value)
 {
 	struct uclamp_group *uc_grp = &rq->uclamp.group[clamp_id][0];
 	int max_value = UCLAMP_NOT_VALID;
@@ -924,6 +925,19 @@ static inline void uclamp_cpu_update(struct rq *rq, int clamp_id)

The condition:

	if (!uclamp_group_active(uc_grp, group_id))
		continue;

in 'for (group_id = 0; group_id <= CONFIG_UCLAMP_GROUPS_COUNT; ++group_id) {}' makes sure that 'max_value == UCLAMP_NOT_VALID' stays true when no clamp group is active, so the if condition (*) below is entered:


 		if (max_value >= SCHED_CAPACITY_SCALE)
 			break;
 	}
+
+	/*
+	 * Just for the UCLAMP_MAX value, in case there are no RUNNABLE
+	 * tasks, we keep the CPU clamped to the last task's clamp value.
+	 * This avoids frequency spikes to MAX when one CPU, with a high
+	 * blocked utilization, sleeps and another CPU in the same frequency
+	 * domain no longer sees the clamp on the first CPU.
+	 */
+	if (clamp_id == UCLAMP_MAX && max_value == UCLAMP_NOT_VALID) {
+		rq->uclamp.flags |= UCLAMP_FLAG_IDLE;
+		max_value = last_clamp_value;
+	}
+

(*): So uc_grp[group_id].value stays at last_clamp_value?

What do you do when the blocked utilization decays below this enforced last_clamp_value on that CPU?

I assume there are plenty of corner cases of this kind, because we have blocked signals (which cover all tasks) and clamping (which only covers RUNNABLE tasks).
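
For what it's worth, here is the numeric sketch I have in mind for that
corner case, assuming the cap is applied as request = min(util, uclamp_max)
(my assumption, not necessarily what the patch ends up doing): once the
blocked utilization decays below the enforced last_clamp_value, the retained
cap stops limiting anything and the request just follows the decay.

#include <stdio.h>

/* Assumed cap semantics for this sketch: request = min(util, uclamp_max). */
static unsigned int sketch_cap(unsigned int util, unsigned int uclamp_max)
{
	return util < uclamp_max ? util : uclamp_max;
}

int main(void)
{
	const unsigned int last_clamp_value = 512;	/* kept enforced while idle */
	const unsigned int blocked[] = { 700, 600, 500, 400, 300 };	/* decaying */
	unsigned int i;

	for (i = 0; i < sizeof(blocked) / sizeof(blocked[0]); i++)
		printf("blocked=%u -> request=%u\n",
		       blocked[i], sketch_cap(blocked[i], last_clamp_value));

	return 0;
}

So in this sketch the stale cap is harmless once the decayed value drops
below it; whether that is also the intended behaviour here is exactly what
I am asking.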