[PATCH 3.12 013/181] sched: Make scale_rt_power() deal with backward clocks

From: Jiri Slaby
Date: Mon Jun 30 2014 - 07:59:56 EST


From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>

3.12-stable review patch. If anyone has any objections, please let me know.

===============

commit cadefd3d6cc914d95163ba1eda766bfe7ce1e5b7 upstream.

Mike reported that, while unlikely, it's entirely possible for
scale_rt_power() to see the time go backwards. This yields rather
'interesting' results.

So, like all other sites that deal with clocks, make this one ignore
backward clock movement too.
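
For context, here is a minimal userspace sketch of the same clamp pattern
(not the kernel code itself; the names safe_delta, now and stamp are made
up for illustration), assuming 64-bit clock values as with rq_clock() and
rq->age_stamp:

	#include <stdint.h>
	#include <stdio.h>

	/* Compute "now - stamp" through a signed type and clamp negative
	 * results to zero, so a clock that briefly reads lower than the
	 * recorded stamp cannot wrap around to a huge unsigned delta.
	 * This mirrors the s64 delta conversion added by the patch. */
	static uint64_t safe_delta(uint64_t now, uint64_t stamp)
	{
		int64_t delta = (int64_t)(now - stamp);

		if (delta < 0)
			delta = 0;

		return (uint64_t)delta;
	}

	int main(void)
	{
		/* Backward clock: stamp is ahead of now; without the clamp
		 * the unsigned difference would be close to 2^64. */
		printf("%llu\n", (unsigned long long)safe_delta(1000, 1005)); /* 0 */
		printf("%llu\n", (unsigned long long)safe_delta(2000, 1500)); /* 500 */
		return 0;
	}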

Reported-by: Mike Galbraith <bitbucket@xxxxxxxxx>
Signed-off-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Link: http://lkml.kernel.org/r/20140227094035.GZ9987@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: linux-kernel@xxxxxxxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Signed-off-by: Jiri Slaby <jslaby@xxxxxxx>
---
kernel/sched/fair.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 25658d2c68d0..898622244bdf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4404,6 +4404,7 @@ static unsigned long scale_rt_power(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	u64 total, available, age_stamp, avg;
+	s64 delta;
 
 	/*
 	 * Since we're reading these variables without serialization make sure
@@ -4412,7 +4413,11 @@ static unsigned long scale_rt_power(int cpu)
 	age_stamp = ACCESS_ONCE(rq->age_stamp);
 	avg = ACCESS_ONCE(rq->rt_avg);
 
-	total = sched_avg_period() + (rq_clock(rq) - age_stamp);
+	delta = rq_clock(rq) - age_stamp;
+	if (unlikely(delta < 0))
+		delta = 0;
+
+	total = sched_avg_period() + delta;
 
 	if (unlikely(total < avg)) {
 		/* Ensures that power won't end up being negative */
--
2.0.0
