o if by "worse" you mean slower then probably yes, it isn't "slower"
o otoh it is "wrong", see below
o not to mention that preempting a thread for another one w/ the same dynamic
priority is "wrong" -- the task switch overhead isn't justified (see the
sketch after this list)
o RT tasks have static priorities, and would also be affected
o i don't see how this "simplifies" the code
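To make the equal-dynamic-priority point concrete, here is a minimal
sketch of the test i'm arguing for (made-up helper name, and goodness()
here just stands in for whatever dynamic-priority metric the scheduler
uses -- this is not the actual code):

static inline int worth_preempting(struct task_struct *curr,
				   struct task_struct *woken)
{
	/*
	 * Strictly greater: waking a task with the _same_ dynamic
	 * priority as the one on the CPU buys nothing, so the task
	 * switch overhead isn't justified and we don't preempt.
	 */
	return goodness(woken) > goodness(curr);
}

the whole issue is '>' vs '>=' -- with '>=' two equal-priority threads
just ping-pong and pay for the switches.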
> > Consider what happens with 2+ equal priority SCHED_FIFO processes...
>
> Only one gets to run until it blocks or calls sched_yield (see the man
> page for sched_setscheduler). What is the problem?
You said it above:
> I guess this is not 100% correct. With my patch the other process
> _may_ get to run, depending on where it is in the runqueue.
now, what happens if there is a SCHED_FIFO thread w/ priority=50 running
and another one becomes runnable?...
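Per the very man page quoted above, the answer has to be: nothing. The
newcomer is queued at the tail of the prio-50 list and the running thread
keeps the CPU until it blocks, yields or a strictly higher-priority RT
thread becomes runnable. As a sketch (hypothetical helper, not the real
wakeup path):

static inline int rt_wakeup_preempts(struct task_struct *curr,
				     struct task_struct *woken)
{
	/*
	 * Only a strictly higher static priority may preempt a running
	 * SCHED_FIFO thread; an equal-priority one just goes to the
	 * tail of the queue for its priority.
	 */
	return woken->rt_priority > curr->rt_priority;
}

anything that lets the equal-priority newcomer run earlier breaks the
documented SCHED_FIFO semantics.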
> > fixing the scheduler w/o (1) introducing new bugs, and (2) making
> > it slower isn't as simple as it seems at first sight. also there
>
> Sure. That's why I'm aiming for a minimal patch. I'd be happy with
> yours too, though, minus the special case for prev.
what "special case for prev"? ;) You just traded one check for
another...
> > The only issue left is iirc the (external) SCHED_YIELD assumptions]
>
> Can you elaborate on this?
There is code like this (example from __get_free_pages()):
>        /*
>         * If we can schedule, do so, and make sure to yield.
>         * We may be a real-time process, and if kswapd is
>         * waiting for us we need to allow it to run a bit.
>         */
>        if (gfp_mask & __GFP_WAIT) {
>                current->policy |= SCHED_YIELD;
>                schedule();
>        }
this assumes the SCHED_YIELD flag will prevent the current task from
being selected if there's anything else to run. Other than being
"wrong" from a modular pov, it's also wrong because that's not what
SCHED_YIELD actually does. Not even in the stock scheduler...
(this btw means that Richard's sched_yield() change is wrong -- normal
threads will _not_ be selected even if an RT task has SCHED_YIELD set)
My plan was to add a schedule_others() call and kill all SCHED_YIELD
use outside of the scheduler, but i haven't gotten around to that yet.
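Roughly what i have in mind (completely untested sketch, the name
schedule_others() and the details are tentative): a helper owned by the
scheduler, so whether it uses SCHED_YIELD internally or something smarter
is no longer the caller's business:

/* kernel/sched.c -- sketch only */
void schedule_others(void)
{
	/*
	 * However "give everybody else a chance to run" ends up being
	 * implemented, it is implemented here and nowhere else.  For
	 * now this would just centralize the old idiom:
	 */
	current->policy |= SCHED_YIELD;
	schedule();
}

and the __get_free_pages() snippet above would shrink to:

	if (gfp_mask & __GFP_WAIT)
		schedule_others();

which also gives a single place to fix once the real SCHED_YIELD
semantics are sorted out.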
Other than this, the patch is as minimal as it gets -- it simplifies
the code (read: the generated code is smaller and faster than the
original) while fixing many bugs at the same time. (not to mention that
all the /this-is-subtle/ parts are gone, and the code at least appears
maintainable now)
If anybody can see any problems left with the scheduler i'd certainly
like to know...