Re: [PATCH v2] sched: Consolidate cpufreq updates

From: Qais Yousef
Date: Tue May 07 2024 - 06:42:20 EST


On 05/07/24 10:02, Peter Zijlstra wrote:
> On Tue, May 07, 2024 at 01:56:59AM +0100, Qais Yousef wrote:
>
> > Yes. How about this? Since stopper class appears as RT, we should still check
> > for this class specifically.
>
> Much nicer!
>
> > static inline void update_cpufreq_ctx_switch(struct rq *rq, struct task_struct *prev)
> > {
> > #ifdef CONFIG_CPU_FREQ
> > 	if (likely(fair_policy(current->policy))) {
> >
> > 		if (unlikely(current->in_iowait)) {
> > 			cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT | SCHED_CPUFREQ_FORCE_UPDATE);
> > 			return;
> > 		}
> >
> > #ifdef CONFIG_SMP
> > 		/*
> > 		 * Allow cpufreq updates once for every update_load_avg() decay.
> > 		 */
> > 		if (unlikely(rq->cfs.decayed)) {
> > 			rq->cfs.decayed = false;
> > 			cpufreq_update_util(rq, 0);
> > 			return;
> > 		}
> > #endif
> > 		return;
> > 	}
> >
> > 	/*
> > 	 * RT and DL should always send a freq update. But we can do some
> > 	 * simple checks to avoid it when we know it's not necessary.
> > 	 */
> > 	if (task_is_realtime(current)) {
> > 		if (dl_task(current) && current->dl.flags & SCHED_FLAG_SUGOV) {
> > 			/* Ignore sugov kthreads, they're responding to our requests */
> > 			return;
> > 		}
> >
> > 		if (rt_task(current) && rt_task(prev)) {
>
> doesn't task_is_realtime() imply rt_task()?
>
> Also, this clause still includes DL tasks, is that okay?

Ugh, yes. Since rt_task() is priority based it is true for DL tasks too, so the
earlier dl_task() check for sugov is not enough to keep DL out of that clause.
I should send a patch to fix the definition of rt_task()!
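
For context, the relevant helpers look roughly like this (quoting
include/linux/sched/rt.h and include/linux/sched/deadline.h from memory, so
double-check against the tree):

static inline int rt_prio(int prio)
{
	if (unlikely(prio < MAX_RT_PRIO))
		return 1;
	return 0;
}

static inline int rt_task(struct task_struct *p)
{
	return rt_prio(p->prio);
}

static inline int dl_prio(int prio)
{
	if (unlikely(prio < MAX_DL_PRIO))
		return 1;
	return 0;
}

static inline int dl_task(struct task_struct *p)
{
	return dl_prio(p->prio);
}

With MAX_DL_PRIO being 0, a DL task's prio satisfies both checks, which is why
rt_task(current) && rt_task(prev) still lets DL tasks through.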

I think at this stage open-coding the policy check with a switch statement is
the best thing to do:

static inline void update_cpufreq_ctx_switch(struct rq *rq, struct task_struct *prev)
{
#ifdef CONFIG_CPU_FREQ
	/*
	 * RT and DL should always send a freq update. But we can do some
	 * simple checks to avoid it when we know it's not necessary.
	 *
	 * iowait_boost will always trigger a freq update too.
	 *
	 * Fair tasks will only trigger an update if the root cfs_rq has
	 * decayed.
	 *
	 * Everything else should do nothing.
	 */
	switch (current->policy) {
	case SCHED_NORMAL:
	case SCHED_BATCH:
		if (unlikely(current->in_iowait)) {
			cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT | SCHED_CPUFREQ_FORCE_UPDATE);
			return;
		}

#ifdef CONFIG_SMP
		if (unlikely(rq->cfs.decayed)) {
			rq->cfs.decayed = false;
			cpufreq_update_util(rq, 0);
			return;
		}
#endif
		return;
	case SCHED_FIFO:
	case SCHED_RR:
		if (rt_policy(prev->policy)) {
#ifdef CONFIG_UCLAMP_TASK
			unsigned long curr_uclamp_min = uclamp_eff_value(current, UCLAMP_MIN);
			unsigned long prev_uclamp_min = uclamp_eff_value(prev, UCLAMP_MIN);

			/* Skip the update if uclamp_min didn't change across the switch */
			if (curr_uclamp_min == prev_uclamp_min)
#endif
				return;
		}
#ifdef CONFIG_SMP
		/* Stopper task masquerades as RT */
		if (unlikely(current->sched_class == &stop_sched_class))
			return;
#endif
		cpufreq_update_util(rq, SCHED_CPUFREQ_FORCE_UPDATE);
		return;
	case SCHED_DEADLINE:
		if (current->dl.flags & SCHED_FLAG_SUGOV) {
			/* Ignore sugov kthreads, they're responding to our requests */
			return;
		}
		cpufreq_update_util(rq, SCHED_CPUFREQ_FORCE_UPDATE);
		return;
	default:
		return;
	}
#endif
}