Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

From: Con Kolivas
Date: Sun Mar 11 2007 - 18:12:42 EST


On Monday 12 March 2007 08:52, Con Kolivas wrote:
> And thank you! I think I know what's going on now. I think each rotation is
> followed by another rotation before the higher-priority task gets a look-in
> in schedule() to even get quota and add it to the runqueue quota. I'll try
> a simple change to see if that helps. Patch coming up shortly.

Can you try the following patch and see if it helps? There's also one minor
preemption logic fix in there that I'm planning on including. Thanks!
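
Roughly the idea, as a toy userspace sketch (toy_rq, TOY_LEVELS and the level
numbers are made up for illustration; they are not the real runqueue
structures in sched.c):

#include <stdio.h>

#define TOY_LEVELS 8

struct toy_rq {
	int prio_level;		/* priority level currently being run */
	int quota[TOY_LEVELS];	/* remaining tick quota per level */
};

/* Merge to new_prio_level and seed it with one tick of quota. */
static void toy_rotate(struct toy_rq *rq, int new_prio_level)
{
	/*
	 * The new level may have no quota yet; give it 1 tick so the tasks
	 * queued there get back through schedule() and add their own quota
	 * before another rotation can fire straight away.
	 */
	rq->quota[new_prio_level] += 1;
	rq->prio_level = new_prio_level;
}

int main(void)
{
	struct toy_rq rq = { .prio_level = 0 };

	toy_rotate(&rq, 1);
	printf("now at level %d, quota %d\n", rq.prio_level,
	       rq.quota[rq.prio_level]);
	return 0;
}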

---
kernel/sched.c | 24 +++++++++++-------------
1 file changed, 11 insertions(+), 13 deletions(-)

Index: linux-2.6.21-rc3-mm2/kernel/sched.c
===================================================================
--- linux-2.6.21-rc3-mm2.orig/kernel/sched.c 2007-03-12 08:47:43.000000000 +1100
+++ linux-2.6.21-rc3-mm2/kernel/sched.c 2007-03-12 09:10:33.000000000 +1100
@@ -96,10 +96,9 @@ unsigned long long __attribute__((weak))
* provided it is not a realtime comparison.
*/
#define TASK_PREEMPTS_CURR(p, curr) \
- (((p)->prio < (curr)->prio) || (((p)->prio == (curr)->prio) && \
+ (((p)->prio < (curr)->prio) || (!rt_task(p) && \
((p)->static_prio < (curr)->static_prio && \
- ((curr)->static_prio > (curr)->prio)) && \
- !rt_task(p)))
+ ((curr)->static_prio > (curr)->prio))))

/*
* This is the time all tasks within the same priority round robin.
@@ -3323,7 +3322,7 @@ static inline void major_prio_rotation(s
*/
static inline void rotate_runqueue_priority(struct rq *rq)
{
- int new_prio_level, remaining_quota;
+ int new_prio_level;
struct prio_array *array;

/*
@@ -3334,7 +3333,6 @@ static inline void rotate_runqueue_prior
if (unlikely(sched_find_first_bit(rq->dyn_bitmap) < rq->prio_level))
return;

- remaining_quota = rq_quota(rq, rq->prio_level);
array = rq->active;
if (rq->prio_level > MAX_PRIO - 2) {
/* Major rotation required */
@@ -3368,10 +3366,11 @@ static inline void rotate_runqueue_prior
}
rq->prio_level = new_prio_level;
/*
- * While we usually rotate with the rq quota being 0, it is possible
- * to be negative so we subtract any deficit from the new level.
+ * As we are merging to a prio_level that may not have anything in
+ * its quota we add 1 to ensure the tasks get to run in schedule() to
+ * add their quota to it.
*/
- rq_quota(rq, new_prio_level) += remaining_quota;
+ rq_quota(rq, new_prio_level) += 1;
}

static void task_running_tick(struct rq *rq, struct task_struct *p)
@@ -3397,12 +3396,11 @@ static void task_running_tick(struct rq
if (!--p->time_slice)
task_expired_entitlement(rq, p);
/*
- * The rq quota can become negative due to a task being queued in
- * scheduler without any quota left at that priority level. It is
- * cheaper to allow it to run till this scheduler tick and then
- * subtract it from the quota of the merged queues.
+ * We only employ the deadline mechanism if we run over the quota.
+ * It allows aliasing problems around the scheduler_tick to be
+ * less harmful.
*/
- if (!rt_task(p) && --rq_quota(rq, rq->prio_level) <= 0) {
+ if (!rt_task(p) && --rq_quota(rq, rq->prio_level) < 0) {
if (unlikely(p->first_time_slice))
p->first_time_slice = 0;
rotate_runqueue_priority(rq);
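
Since the preemption change is a bit terse as a macro, here is the same test
written out as a plain function for readability (illustration only; toy_task
and its rt flag stand in for the real task_struct fields and rt_task()):

struct toy_task {
	int prio;		/* dynamic priority */
	int static_prio;	/* nice-derived priority */
	int rt;			/* non-zero for a realtime task */
};

static int toy_task_preempts_curr(const struct toy_task *p,
				  const struct toy_task *curr)
{
	/* A better dynamic priority always preempts. */
	if (p->prio < curr->prio)
		return 1;
	/*
	 * Otherwise a non-rt task with a better nice level may preempt a
	 * task whose dynamic priority is currently better than its static
	 * (nice) priority.
	 */
	return !p->rt && p->static_prio < curr->static_prio &&
		curr->static_prio > curr->prio;
}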


--
-ck