Re: [patch v5 14/15] sched: power aware load balance

From: Preeti U Murthy
Date: Thu Mar 21 2013 - 04:43:12 EST


Hi Alex,

On 03/21/2013 01:13 PM, Alex Shi wrote:
> On 03/20/2013 12:57 PM, Preeti U Murthy wrote:
>> Neither core will be able to pull the task from the other to consolidate
>> the load because the rq->util of t2 and t4, on which no process is
>> running, continue to show some number even though they degrade with time
>> and sgs->utils accounts for them. Therefore,
>> for core1 and core2, the sgs->utils will be slightly above 100 and the
>> above condition will fail, thus failing them as candidates for
>> group_leader,since threshold_util will be 200.
>
> Thanks for note, Preeti!
>
> Did you find some real issue in some platform?
> In theory, a totally idle cpu has a zero rq->util at least after 3xxms,
> and in fact, I find the code works fine on my machines.
>

Yes, I did find this behaviour very consistently on a 2-socket, 8-core
machine.

rq->util cannot go back to 0 after it has begun accumulating load, right?

Say a load was running on a runqueue, driving its rq->util to 100%.
After the load finishes, the runqueue goes idle. On every scheduler
tick its utilisation decays, but it can never become 0.

rq->util = rq->avg.runnable_avg_sum/rq->avg.runnable_avg_period
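
To make the decay concrete, here is a small user-space toy model of it
(not the kernel's PELT code; the decay factor y with y^32 == 0.5, the
1024-unit-per-ms period contribution and the fully-loaded starting value
of ~47742 are illustrative assumptions borrowed from the per-entity
load-tracking scheme):

#include <stdio.h>

/*
 * Toy model of rq->util decay on a runqueue that goes idle after
 * running at full utilisation.  Each "tick" the runnable sum only
 * decays, while the period both decays and accumulates new wall-clock
 * time, so the ratio shrinks geometrically but, in exact arithmetic,
 * never reaches 0.
 */
int main(void)
{
	double runnable_avg_sum = 47742.0;	/* history of a fully loaded rq */
	double runnable_avg_period = 47742.0;
	const double decay = 0.978572;		/* y, with y^32 == 0.5 */

	for (int ms = 1; ms <= 400; ms++) {
		runnable_avg_sum *= decay;	/* idle: no new runnable time */
		runnable_avg_period = runnable_avg_period * decay + 1024.0;
		if (ms % 100 == 0)
			printf("after %3d ms: util = %f\n",
			       ms, runnable_avg_sum / runnable_avg_period);
	}
	return 0;
}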

This ratio will come close to 0, but will never become 0 once it has
picked up a value. So if a sched_group consists of two runqueues, one
having utilisation 100 and running 1 load, and the other having
utilisation 0.001 but running no load, then in
update_sd_lb_power_stats() the condition

"sgs->group_utils + FULL_UTIL > threshold_util"

evaluates to (100.001 + 100 > 200) and hence the group fails to become
the group leader and take on more tasks.
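
Plugging the numbers from this example into the check gives the
following standalone sketch (FULL_UTIL = 100 and threshold_util = 200,
i.e. two fully-utilisable CPUs, are the values used in this discussion,
not necessarily the exact constants in the patch):

#include <stdbool.h>
#include <stdio.h>

/*
 * Sketch of the group_leader check discussed above, with one rq at
 * full utilisation and the other idle but carrying a residual ~0.001.
 */
int main(void)
{
	double full_util = 100.0;
	double threshold_util = 2.0 * full_util;	/* two CPUs in the group */
	double group_utils = 100.0 + 0.001;		/* busy rq + idle rq residue */
	bool rejected = group_utils + full_util > threshold_util;

	printf("%.3f + %.0f > %.0f ? %s -> group %s become leader\n",
	       group_utils, full_util, threshold_util,
	       rejected ? "yes" : "no", rejected ? "cannot" : "can");
	return 0;
}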


Regards
Preeti U Murthy
