[RFC, PATCH 0/9] CPU Controller V2

From: Srivatsa Vaddagiri
Date: Thu Sep 28 2006 - 13:26:18 EST


Here's V2 of the token-based CPU controller I have been working on.

Changes since last version (posted at http://lkml.org/lkml/2006/8/20/115):

- A task's load was not being updated when it moved between task-groups
with different quotas (bug hit by Mike Galbraith).

- SMP load balancing seems to work -much- better now wrt its awareness
of the quota on each task-group. The trick was to go beyond the
max_load difference in __move_tasks and instead use the load
difference between the two task-groups on the different cpus as
the basis for pulling tasks (a rough sketch of the idea follows
this list of changes).

- Improved timeslice management, aimed at handling bursty
workloads better. Patch 3/9 documents how timeslices are managed
for the various task-groups.

- Modified cpuset interface as per Paul Jackson's suggestions.
Some of the changes are:
- s/meter_cpu/cpu_meter_enabled
- s/cpu_quota/cpu_meter_quota
- s/FILE_METER_FLAG/FILE_CPU_METER_ENABLED
- s/FILE_METER_QUOTA/FILE_CPU_METER_QUOTA
- Don't allow cpu_meter_enabled to be turned on for an
"in-use" cpuset (one which has tasks attached to it)
- Don't allow cpu_meter_quota to be changed for an
"in-use" cpuset (one which has tasks attached to it)

The last two are temporary limitations until we figure out how
to get at a cpuset's task-list more easily.
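
To make the load-balance change above a bit more concrete, here is a
minimal sketch of the idea (illustrative only, not the patch code --
task_grp, tg_load() and pull_task_from_grp() are assumed helper names,
not the identifiers used in the patches): rather than pulling purely on
the cpus' aggregate max_load difference, tasks of a group are pulled
only while that group's load on the busiest cpu exceeds its load on the
pulling cpu.

/*
 * Illustrative sketch only -- task_grp, tg_load() and
 * pull_task_from_grp() are assumed names, not the patch identifiers.
 */
struct task_grp;

/* load contributed by tasks of @tg on @cpu (assumed helper) */
extern unsigned long tg_load(struct task_grp *tg, int cpu);

/* move one task of @tg from @busiest_cpu to @this_cpu; 0 if none moved */
extern int pull_task_from_grp(struct task_grp *tg,
			      int busiest_cpu, int this_cpu);

/*
 * Balance group-by-group: keep pulling tasks of @tg as long as its
 * load on the busiest cpu exceeds its load on this cpu, instead of
 * relying on the cpus' total load difference alone.
 */
static void balance_task_grp(struct task_grp *tg, int busiest_cpu, int this_cpu)
{
	while (tg_load(tg, busiest_cpu) > tg_load(tg, this_cpu)) {
		if (!pull_task_from_grp(tg, busiest_cpu, this_cpu))
			break;	/* nothing (more) could be moved */
	}
}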

Still on my todo list:

- Improved surplus cycles management. If groups A, B and C have
been given 50%, 30% and 20% quota respectively and group B
is idle, B's quota has to be divided between A and C in a 5:2
proportion, i.e. in proportion to their own quotas (a small
sketch of the math follows this list).

- Although load balancing seems to be working nicely for the
testcases I have been running, I anticipate certain corner
cases which are yet to be worked out. In particular, I need to
make sure some of the HT/MC optimizations are not broken.
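
The surplus-distribution math intended above is simple; here is a tiny
userspace sketch (my own code, not part of the patches) showing the
idea -- an idle group's quota is handed to the busy groups in
proportion to their own quotas:

/*
 * Sketch of the intended surplus redistribution (not patch code).
 * With A=50%, B=30% (idle), C=20%, B's 30% is split between A and C
 * in a 5:2 ratio, leaving A with ~71% and C with ~29%.
 */
#include <stdio.h>

static void redistribute(const int quota[], const int idle[], int eff[], int n)
{
	int i, busy_total = 0, surplus = 0;

	for (i = 0; i < n; i++) {
		if (idle[i])
			surplus += quota[i];
		else
			busy_total += quota[i];
	}

	for (i = 0; i < n; i++) {
		if (idle[i])
			eff[i] = 0;
		else	/* surplus share proportional to own quota */
			eff[i] = quota[i] + surplus * quota[i] / busy_total;
	}
}

int main(void)
{
	int quota[] = { 50, 30, 20 };	/* groups A, B, C */
	int idle[]  = {  0,  1,  0 };	/* B is idle */
	int eff[3];

	redistribute(quota, idle, eff, 3);
	printf("A=%d%% B=%d%% C=%d%%\n", eff[0], eff[1], eff[2]);
	/* prints A=71% B=0% C=28% with integer arithmetic */
	return 0;
}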


Ingo/Nick, IMHO the virtualized cpu-runqueues approach to solving the
controller requirement is not a good idea, since:

- retaining the existing load-balance optimizations for the MC/SMT case
is going to be hard (it has to be done at schedule time now)
- because of the virtualization, two virtual cpus could end up running
on the same physical cpu, which would defeat the careful SMP
optimizations put in place all over the kernel
- not to mention that specialized apps which bind to CPUs for
performance reasons may behave badly in such a virtualized
environment.

Hence I have been pursuing simpler approaches like the one in this
patch series.

Your comments/views on this are highly appreciated.

--
Regards,
vatsa