Re: [patch] new scheduler

Ingo Molnar (mingo@chiara.csoma.elte.hu)
Tue, 11 May 1999 14:50:42 +0200 (CEST)


On Tue, 11 May 1999, Andrea Arcangeli wrote:

> >> 1) there is no idle CPU in the system
> >> 2) the prev task was in general the lowest-priority one
> >
> >wrong. We actively _preempt_ processes on other CPUs; this means that a
>
> If I understand correctly, the only wrong thing I said is point (2). But
> if you read my email more carefully, I stated clearly that point (2) is
> not exactly reality; rather, I said that it's a very good approximation
> of reality.
>
> >preempted process has every right to try to replace even lower-priority
> >processes on other CPUs. Your problem might be that you are thinking in
>
> You should reread my previous email. I never said it's useless; I said
> that it's not worthwhile.

it is always 'worthwhile' to have a correct scheduler. This was the sole
purpose of all the 2.2.8 scheduler changes!

> You mean that a preempted process has every right to preempt an even
> lower-priority process on another CPU. But ask yourself _why_ such a
> process has been preempted. Simply because, in general, we can regard it
> as the _lowest_ priority one.

not necessarily, it might just as well have been replaced by an RT process ...

> I repeat that, as an overall design, I prefer to have such a call in
> schedule_tail, even if in my view it's only a performance _hit_.

it is not a performance hit at all, because most processes reschedule
'voluntarily', i.e. they get removed from the runqueue.
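
To make this concrete, here is a minimal standalone sketch (not kernel
code; the task structure and state constants are simplified stand-ins
assumed for illustration). A task that blocks sets its state to
something other than TASK_RUNNING before calling schedule(), so the
push towards other CPUs is skipped entirely in the common case:

#include <assert.h>

#define TASK_RUNNING		0
#define TASK_INTERRUPTIBLE	1

/* simplified stand-in for struct task_struct */
struct task { int state; };

/* only a still-runnable previous task is a candidate for pushing */
static int wants_push(const struct task *prev)
{
	return prev->state == TASK_RUNNING;
}

int main(void)
{
	struct task sleeper = { TASK_INTERRUPTIBLE };	/* voluntary reschedule */
	struct task victim  = { TASK_RUNNING };		/* preempted */

	assert(!wants_push(&sleeper));	/* common case: no extra work */
	assert(wants_push(&victim));	/* rare case: try other CPUs */
	return 0;
}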

There is one inconsistency left, though: if the previous process was
SCHED_YIELD then we should obviously not push it to other CPUs, because it
has just given up its timeslice. (The attached untested patch fixes this.)

-- mingo

--- linux/kernel/sched.c.orig Tue May 11 13:29:39 1999
+++ linux/kernel/sched.c Tue May 11 13:38:32 1999
@@ -194,10 +194,8 @@
static inline int prev_goodness (struct task_struct * prev,
struct task_struct * p, int this_cpu)
{
- if (p->policy & SCHED_YIELD) {
- p->policy &= ~SCHED_YIELD;
+ if (p->policy & SCHED_YIELD)
return 0;
- }
return goodness(prev, p, this_cpu);
}

@@ -659,10 +657,16 @@
*/
static inline void __schedule_tail (struct task_struct *prev)
{
+ if (prev->policy & SCHED_YIELD)
+ prev->policy &= ~SCHED_YIELD;
+ else {
+#ifdef __SMP__
+ if ((prev->state == TASK_RUNNING) &&
+ (prev != idle_task(smp_processor_id())))
+ reschedule_idle(prev);
+#endif
+ }
#ifdef __SMP__
- if ((prev->state == TASK_RUNNING) &&
- (prev != idle_task(smp_processor_id())))
- reschedule_idle(prev);
wmb();
prev->has_cpu = 0;
#endif /* __SMP__ */
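
For illustration, here is a minimal standalone sketch of the behaviour
the patch is after (simplified stand-ins, not the real kernel
structures or flag values). A SCHED_YIELD previous task scores zero
when it is re-evaluated, so any other runnable task will beat it, and
__schedule_tail() merely clears the flag instead of calling
reschedule_idle():

#include <stdio.h>

#define TASK_RUNNING	0
#define SCHED_YIELD	0x10	/* assumed value, for the sketch only */

/* simplified stand-in for struct task_struct */
struct task {
	const char *comm;
	int state;
	int policy;
	int prio;		/* stands in for the full goodness() value */
};

/* stand-in for goodness(): just the priority here */
static int goodness_sketch(const struct task *p)
{
	return p->prio;
}

/* mirrors prev_goodness() after the patch: no flag clearing here */
static int prev_goodness_sketch(const struct task *p)
{
	if (p->policy & SCHED_YIELD)
		return 0;
	return goodness_sketch(p);
}

/* mirrors the decision made by the patched __schedule_tail() */
static void schedule_tail_sketch(struct task *prev)
{
	if (prev->policy & SCHED_YIELD) {
		prev->policy &= ~SCHED_YIELD;	/* just drop the flag */
		printf("%s: yielded, not pushed to other CPUs\n", prev->comm);
	} else if (prev->state == TASK_RUNNING) {
		printf("%s: still runnable, reschedule_idle()\n", prev->comm);
	}
}

int main(void)
{
	struct task yielder   = { "yielder",   TASK_RUNNING, SCHED_YIELD, 20 };
	struct task preempted = { "preempted", TASK_RUNNING, 0,           10 };

	/* even a high-priority yielder scores 0 here */
	printf("prev_goodness: yielder=%d preempted=%d\n",
	       prev_goodness_sketch(&yielder),
	       prev_goodness_sketch(&preempted));

	schedule_tail_sketch(&yielder);
	schedule_tail_sketch(&preempted);
	return 0;
}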
