Re: [patch 3/6] sched, nohz: sched group, domain aware nohz idle load balancing

From: Suresh Siddha
Date: Mon Nov 28 2011 - 18:54:14 EST


On Thu, 2011-11-24 at 03:53 -0800, Peter Zijlstra wrote:
> On Fri, 2011-11-18 at 15:03 -0800, Suresh Siddha wrote:
> > Make nohz idle load balancing more scalable by using the nr_busy_cpus in
> > the struct sched_group_power.
> >
> > Idle load balance is kicked on one of the idle cpus when there is at least
> > one idle cpu and
> >
> > - a busy rq having more than one task or
> >
> > - a busy scheduler group having multiple busy cpus that exceed the sched group
> > power or
> >
> > - for the SD_ASYM_PACKING domain, if the lower numbered cpus in that
> > domain are idle while the higher numbered ones are busy.
> >
> > This helps kick the idle load balancing request only when there is a
> > real imbalance, and once the system is mostly balanced, these kicks
> > are minimized.
> >
> > These changes improved a context-switch-intensive workload running
> > between a number of task pairs by 2x on an 8-socket NHM-EX based system.
>
> OK, but the nohz idle balance will still iterate the whole machine
> instead of smaller parts, right?

In the current series, yes. One idle cpu spending a bit more time doing
idle load balancing may be better than waking up multiple idle cpus
from deep c-states.
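
To make that concrete, the kick conditions from the changelog quoted
above amount to roughly the following userspace sketch. The struct
layouts, the SCHED_POWER_SCALE value and the function signature here
are illustrative assumptions for the sake of the sketch, not the
actual kernel code:

#include <stdbool.h>

#define SCHED_POWER_SCALE 1024  /* nominal capacity of one cpu */

/* illustrative stand-ins, not the kernel's types */
struct sched_group_power {
    unsigned int power;         /* group capacity, SCHED_POWER_SCALE units */
    unsigned int nr_busy_cpus;  /* cpus in this group running a task */
};

struct rq {
    unsigned int nr_running;    /* runnable tasks on this cpu's runqueue */
};

/* should this busy cpu kick an idle cpu to run nohz idle balancing? */
static bool nohz_kick_needed(const struct rq *rq,
                             const struct sched_group_power *sgp,
                             bool asym_packing, int cpu, int first_idle_cpu)
{
    /* a busy rq with more than one runnable task */
    if (rq->nr_running > 1)
        return true;

    /* busy cpus in the group exceed the group's capacity in cpu units */
    if (sgp->nr_busy_cpus > sgp->power / SCHED_POWER_SCALE)
        return true;

    /* SD_ASYM_PACKING: a lower numbered cpu sits idle below this busy one */
    if (asym_packing && first_idle_cpu >= 0 && first_idle_cpu < cpu)
        return true;

    return false;
}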

But if needed, we can easily partition the nohz idle load balancing work
across multiple idle cpus. We would need to balance the partition size
against the number of idle cpus we bring out of tickless mode to do this
balancing; a rough sketch of the idea follows.
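
Something like the sketch below, purely to illustrate that trade-off.
The PARTITION_SIZE knob, the bitmask representation of the idle cpus
and kick_ilb_cpu() are all made-up names for this sketch, not anything
in the series:

#include <stdio.h>

#define NR_CPUS        64
#define PARTITION_SIZE 16   /* idle cpus balanced per woken balancer */

static unsigned long long nohz_idle_cpus;  /* bitmask of tickless idle cpus */

/* stand-in for sending the nohz-balance IPI to @cpu */
static void kick_ilb_cpu(int cpu)
{
    printf("waking cpu %d to balance its partition\n", cpu);
}

/*
 * Wake one balancer per PARTITION_SIZE idle cpus instead of a single
 * balancer for the whole machine. A larger PARTITION_SIZE means fewer
 * wakeups from deep c-states but more work per woken cpu.
 */
static void kick_partitioned_ilb(void)
{
    int count = 0, first = -1;

    for (int cpu = 0; cpu < NR_CPUS; cpu++) {
        if (!(nohz_idle_cpus & (1ULL << cpu)))
            continue;
        if (first < 0)
            first = cpu;        /* this cpu will balance the partition */
        if (++count == PARTITION_SIZE) {
            kick_ilb_cpu(first);
            count = 0;
            first = -1;
        }
    }
    if (first >= 0)
        kick_ilb_cpu(first);    /* partial last partition */
}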

The currently proposed series already has the infrastructure to identify
which scheduler domain has the imbalance. Perhaps we can use that to do
the nohz idle load balancing only for that domain, along the lines of
the sketch below.
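
Here the span-as-bitmask representation of a sched_domain and
rebalance_idle_cpu() are again illustrative stand-ins, not the
kernel's types:

#include <stdio.h>

#define NR_CPUS 64

/* illustrative stand-in, not the kernel's struct sched_domain */
struct sched_domain {
    unsigned long long span;   /* bitmask of cpus covered by this domain */
};

static unsigned long long nohz_idle_cpus;  /* bitmask of tickless idle cpus */

/* stand-in for running load balance on behalf of one idle cpu */
static void rebalance_idle_cpu(int cpu)
{
    printf("balancing on behalf of idle cpu %d\n", cpu);
}

/*
 * Walk only the idle cpus inside the span of the domain found to be
 * imbalanced, instead of every nohz idle cpu in the machine.
 */
static void nohz_idle_balance_domain(const struct sched_domain *sd)
{
    unsigned long long todo = nohz_idle_cpus & sd->span;

    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (todo & (1ULL << cpu))
            rebalance_idle_cpu(cpu);
}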

For now, I am trying to do better than what mainline has.

thanks,
suresh
