Kanoj, our cpu-pooling + load balancing allows you to do that.
The system administrator can specify at runtime, through a
/proc filesystem interface, the cpu-pool size and whether load
balancing should take place.
We can put the limiting to the local cpu-set during reschedule_idle
back into the code, to make it complete and compatible with
the approach that Andrea has taken.
This way, one can fully isolate or combine cpu-sets.
Here is the code for the pooling, load balancing, and the /proc
interface, combined in this module.
a writeup explaining this concept is available under
Prerequisite is the MQ scheduler...
We need to update these for 2.4.3 .... (coming)
Enterprise Linux Group (Mgr), Linux Technology Center (Member Scalability), OS-PIC (Chair)
(w) 914-945-2003 (fax) 914-945-4425 TL: 862-2003
Kanoj Sarcar <email@example.com>@lists.sourceforge.net on
04/04/2001 12:50:58 PM
Sent by: firstname.lastname@example.org
To: email@example.com (Andrea Arcangeli)
cc: firstname.lastname@example.org (Ingo Molnar), Hubertus Franke/Watson/IBM@IBMUS,
email@example.com (Mike Kravetz), firstname.lastname@example.org (Fabio
Riccardi), email@example.com (Linux Kernel List),
Subject: Re: [Lse-tech] Re: a quest for a better scheduler
> I didn't see anything from Kanoj but I did something myself for the
> this is mostly a userspace issue, not really intended as a kernel
> (however it's also partly a kernel optimization). Basically it splits the
> load of the numa machine into per-node load; there can be unbalanced load
> between nodes, but fairness is guaranteed inside each node. It's not extremely
> well tested, but benchmarks were ok and it is at least certainly stable.
Just a quick comment. Andrea, unless your machine has some hardware
feature that implies pernode runqueues will help (node-level caches etc.),
I fail to understand how this is helping you ... here's a simple theory though.
If your system is lightly loaded, your pernode queues are actually
implementing some sort of affinity, making sure processes stick to
the cpus on the nodes where they have allocated most of their memory. I am
not sure what the situation will be under huge loads though.
As I have mentioned to some people before, percpu/pernode/percpuset/global
runqueues probably all have their advantages and disadvantages, and their
own sweet spots. Wouldn't it be really neat if a system administrator
or performance expert could pick and choose what scheduler behavior he
wants, based on how the system is going to be used?
This archive was generated by hypermail 2b29 : Sat Apr 07 2001 - 21:00:14 EST