Re: [PATCH 3/3] sched, fair: Stop searching for tasks in newidle balance if there are runnable tasks

From: Mike Galbraith
Date: Thu Apr 24 2014 - 15:07:15 EST


On Thu, 2014-04-24 at 09:37 -0700, Jason Low wrote:
> On Thu, 2014-04-24 at 04:51 +0200, Mike Galbraith wrote:
> > On Wed, 2014-04-23 at 18:30 -0700, Jason Low wrote:
> > > It was found that when running some workloads (such as AIM7) on large systems
> > > with many cores, CPUs do not remain idle for long. Thus, tasks can wake up
> > > and get enqueued while a CPU is still doing idle balancing.
> > >
> > > In this patch, while traversing the domains in idle balance, in addition to
> > > checking for pulled_task, we add an extra check of this_rq->nr_running to
> > > determine whether we should stop searching for tasks to pull. If there are
> > > runnable tasks on this rq, we stop traversing the domains. This reduces the
> > > chance that idle balance delays a task from running.
> > >
> > > This patch resulted in approximately a 6% performance improvement when
> > > running a Java Server workload on an 8 socket machine.
> >
> > Checking rq->lock for contention before ever going into idle balancing
> > should give you a bit more. No need to run around looking for work that's
> > trying to arrive. By not going there, and perhaps stacking tasks instead,
> > you may head off a future bounce as well.
>
> Are you thinking of something along the lines of this:
>
> @@ -6658,7 +6658,8 @@ static int idle_balance(struct rq *this_rq)
>  	 */
>  	this_rq->idle_stamp = rq_clock(this_rq);
>  
> -	if (this_rq->avg_idle < sysctl_sched_migration_cost)
> +	if (this_rq->avg_idle < sysctl_sched_migration_cost ||
> +	    spin_is_contended(&this_rq->lock))
>  		goto out;
> 

More or less, yes, that's what I was thinking: the wakeup you are watching
for, and encountering in reality, could very well be the very one that is
contending for this_rq->lock. But as noted, my reaction to that wakeup
couldn't possibly have been further off the mark.
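
For reference, the nr_running check described in the changelog above lands in
the domain-traversal loop of idle_balance(). The sketch below reconstructs the
shape of that change against kernel/sched/fair.c from the description alone;
the surrounding loop context (for_each_domain(), pulled_task) is assumed from
that era's code, not quoted from the actual patch:

@@ static int idle_balance(struct rq *this_rq)
 	for_each_domain(this_cpu, sd) {
 		...
-		if (pulled_task)
+		/*
+		 * Stop searching for tasks to pull if there are
+		 * now runnable tasks on this rq.
+		 */
+		if (pulled_task || this_rq->nr_running > 0)
 			break;
 	}

Combined with the avg_idle/spin_is_contended() test in the hunk quoted above,
that would give two bail-out points: one before any balancing starts, and one
between domains once work has already arrived on this rq.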

-Mike
