Re: [RFC v2 PATCH 4/8] sched: Enforce hard limits by throttling

From: Peter Zijlstra
Date: Tue Oct 13 2009 - 10:28:57 EST


On Wed, 2009-09-30 at 18:22 +0530, Bharata B Rao wrote:

> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 0f1ea4a..77ace43 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1024,7 +1024,7 @@ struct sched_domain;
> struct sched_class {
> const struct sched_class *next;
>
> - void (*enqueue_task) (struct rq *rq, struct task_struct *p, int wakeup);
> + int (*enqueue_task) (struct rq *rq, struct task_struct *p, int wakeup);
> void (*dequeue_task) (struct rq *rq, struct task_struct *p, int sleep);
> void (*yield_task) (struct rq *rq);
>

I really hate this; it uglifies all the enqueue code in a horrid way
(which is most of this patch).

Why can't we simply enqueue the task on a throttled group just like rt?

> @@ -3414,6 +3443,18 @@ int can_migrate_task(struct task_struct *p, struct rq *rq, int this_cpu,
> }
>
> /*
> + * Don't migrate the task if it belongs to a
> + * - throttled group on its current cpu
> + * - throttled group on this_cpu
> + * - group whose hierarchy is throttled on this_cpu
> + */
> + if (cfs_rq_throttled(cfs_rq_of(&p->se)) ||
> + task_group_throttled(task_group(p), this_cpu)) {
> + schedstat_inc(p, se.nr_failed_migrations_throttled);
> + return 0;
> + }
> +
> + /*
> * Aggressive migration if:
> * 1) task is cache cold, or
> * 2) too many balance attempts have failed.

Simply don't iterate throttled groups?


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/