Re: [ANNOUNCE][RFC] PlugSched-6.4 for 2.6.18-rc2

From: Al Boldi
Date: Wed Jul 26 2006 - 07:21:10 EST

Peter Williams wrote:
> Al Boldi wrote:
> >>>>>>> It might be really useful to allow a per-PID parent scheduler,
> >>>>>>> thus allowing the stacking of different scheduler semantics.
> >>>>>>> This could aid flexibility a lot.
> >>>>>>
> >>>>>> I don't understand what you mean here. Could you elaborate?
> >>>>>
> >>>>> i.e: Boot the kernel with spa_no_frills, then start X with spa_ws.
> >>>>
> >>>> It's probably not a good idea to have different schedulers managing
> >>>> the same resource. The way to do different scheduling per process is
> >>>> to use the scheduling policy mechanism i.e. SCHED_FIFO, SCHED_RR,
> >>>> etc. (possibly extended) within each scheduler. On the other hand,
> >>>> on an SMP system, having a different scheduler on each run queue (or
> >>>> subset of queues) might be interesting :-).
> >>>
> >>> What's wrong with multiple run-queues on UP?
> >>
> >> A really high likelihood of starvation of some tasks.
> >
> > Maybe you are thinking of running independent run-queues, in which case
> > it would probably be unwise to run multiple RQs on a single CPU.
> No. I'm thinking about different schedulers on a single run queue. I
> don't think that it's a good idea.

Running different scheds on a single RQ, managing the same resource at the
same time, would be rather odd. That's why independent RQs are necessary even
on SMP. OTOH, running independent RQs on UP doesn't make much sense unless
there is a way to relate them.

> > But I was more thinking of a run-queue of run-queues, with the masterRQ
> > scheduling slaveRQs, each RQ possibly running its own scheduling
> > semantics.
> I think that you need to think a bit harder about the consequences of
> such a system. The word "chaos" springs to mind.

Are you sure?

MultiDimensional RunQueues spring to mind.


