Re: [RFC][PATCH v4 3/3] sched: Periodically decay max cost of idle balance

From: Peter Zijlstra
Date: Mon Sep 09 2013 - 07:45:17 EST


On Tue, Sep 03, 2013 at 11:02:59PM -0700, Jason Low wrote:
> On Fri, 2013-08-30 at 12:29 +0200, Peter Zijlstra wrote:
> >  	rcu_read_lock();
> >  	for_each_domain(cpu, sd) {
> > +		/*
> > +		 * Decay the newidle max times here because this is a regular
> > +		 * visit to all the domains. Decay ~0.5% per second.
> > +		 */
> > +		if (time_after(jiffies, sd->next_decay_max_lb_cost)) {
> > +			sd->max_newidle_lb_cost =
> > +				(sd->max_newidle_lb_cost * 254) / 256;
>
> I initially picked 0.5%, but after trying it out, it appears to decay very
> slowly when the max is at a high value. Should we increase the decay a
> little bit more? Maybe something like:
>
> sd->max_newidle_lb_cost = (sd->max_newidle_lb_cost * 63) / 64;

So the half-life in either case is given by:

n = ln(1/2) / ln(x)

which gives 88 seconds for x := 254/256 or 44 seconds for x := 63/64.
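Spelling those out (plain arithmetic, one decay step per second as in the
hunk above):

  n(254/256) = ln(0.5) / ln(254/256) ~= -0.693 / -0.00784 ~= 88
  n( 63/64 ) = ln(0.5) / ln( 63/64 ) ~= -0.693 / -0.0157  ~= 44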

I don't really care too much, but obviously something like:

256*exp(ln(.5)/60) ~= 253

Is attractive ;-)
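
Substituted into the hunk above that would read something like (untested;
the half-life then works out to roughly a minute):

	sd->max_newidle_lb_cost =
		(sd->max_newidle_lb_cost * 253) / 256;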

> > +		/*
> > +		 * Stop the load balance at this level. There is another
> > +		 * CPU in our sched group which is doing load balancing more
> > +		 * actively.
> > +		 */
> > +		if (!continue_balancing) {
>
> Is "continue_balancing" named "balance" in older kernels?

Yeah, this patch crossed paths with a series remodeling the load-balancer
a bit; that should all be pushed out to tip/master.

In particular see commit:
23f0d20 sched: Factor out code to should_we_balance()
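
For reference, the renamed flag is consumed in the reworked load_balance()
roughly as below; this is a sketch from memory, see the commit for the
exact code:

	/* in load_balance() */
	if (!should_we_balance(&env)) {
		*continue_balancing = 0;
		goto out_balanced;
	}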

> Here are the AIM7 results with the other 2 patches + this patch with the
> slightly higher decay value.

Just to clarify, 'this patch' is the one I sent?