[RFC][Patch 09/18] sched: update division in cpu_avg_load_per_task to use div_u64

From: Nikhil Rao
Date: Wed Apr 20 2011 - 16:56:38 EST


This patch updates the division in cpu_avg_load_per_task() to use div_u64() so
that it works on 32-bit systems. We do not convert avg_load_per_task to u64,
since it can be at most 2^28 and therefore still fits into an unsigned long on
32-bit.
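For reference, a userspace sketch (not kernel code) of what div_u64() provides:
a 64-by-32 division helper that avoids relying on the compiler's implicit
64-bit division support on 32-bit targets. The real helper lives in
<linux/math64.h>; the names and values below are purely illustrative.

	#include <stdint.h>
	#include <stdio.h>

	/* Rough stand-in for the generic div_u64() fallback. */
	static inline uint64_t sketch_div_u64(uint64_t dividend, uint32_t divisor)
	{
		/* The kernel version goes through div_u64_rem(). */
		return dividend / divisor;
	}

	int main(void)
	{
		uint64_t load_weight = 1ULL << 40;	/* hypothetical 64-bit load.weight */
		uint32_t nr_running = 3;

		printf("avg load per task: %llu\n",
		       (unsigned long long)sketch_div_u64(load_weight, nr_running));
		return 0;
	}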

Signed-off-by: Nikhil Rao <ncrao@xxxxxxxxxx>
---
kernel/sched.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index f0adb0e..8047f10 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1525,7 +1525,7 @@ static unsigned long cpu_avg_load_per_task(int cpu)
 	unsigned long nr_running = ACCESS_ONCE(rq->nr_running);
 
 	if (nr_running)
-		rq->avg_load_per_task = rq->load.weight / nr_running;
+		rq->avg_load_per_task = div_u64(rq->load.weight, nr_running);
 	else
 		rq->avg_load_per_task = 0;

--
1.7.3.1
