Re: [PATCH 5/8] sched: Favour moving tasks towards the preferred node

From: Peter Zijlstra
Date: Fri Jun 28 2013 - 05:05:07 EST


On Fri, Jun 28, 2013 at 01:41:20PM +0530, Srikar Dronamraju wrote:

Please trim your replies.

> > +/* Returns true if the destination node has incurred more faults */
> > +static bool migrate_improves_locality(struct task_struct *p, struct lb_env *env)
> > +{
> > + int src_nid, dst_nid;
> > +
> > + if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
> > + return false;
> > +
> > + src_nid = cpu_to_node(env->src_cpu);
> > + dst_nid = cpu_to_node(env->dst_cpu);
> > +
> > + if (src_nid == dst_nid)
> > + return false;
> > +
> > + if (p->numa_migrate_seq < sysctl_numa_balancing_settle_count &&
>
> Let's say the numa_migrate_seq is greater than settle_count but the task is
> running on the wrong node; shouldn't this be taken as a good opportunity to
> move the task?

I think that's what it's doing; this statement says: if seq is large and
we're trying to move to the 'right' node, move it now. (The check is
reassembled in one piece below the quoted function.)

> > + p->numa_preferred_nid == dst_nid)
> > + return true;
> > +
> > + return false;
> > +}
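For easier reading, here is the check being discussed reassembled in one
piece; this is the code from the quoted patch, with the condition that the
quoting splits across the reply shown together:

    /* Returns true if the destination node has incurred more faults */
    static bool migrate_improves_locality(struct task_struct *p, struct lb_env *env)
    {
    	int src_nid, dst_nid;

    	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
    		return false;

    	src_nid = cpu_to_node(env->src_cpu);
    	dst_nid = cpu_to_node(env->dst_cpu);

    	if (src_nid == dst_nid)
    		return false;

    	/* seq below the settle count and dst is the preferred node */
    	if (p->numa_migrate_seq < sysctl_numa_balancing_settle_count &&
    	    p->numa_preferred_nid == dst_nid)
    		return true;

    	return false;
    }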
> > +
> > +
> > /*
> > * can_migrate_task - may task p from runqueue rq be migrated to this_cpu?
> > */
> > @@ -3945,10 +3977,14 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
> >
> > /*
> > * Aggressive migration if:
> > - * 1) task is cache cold, or
> > - * 2) too many balance attempts have failed.
> > + * 1) destination numa is preferred
> > + * 2) task is cache cold, or
> > + * 3) too many balance attempts have failed.
> > */
> >
> > + if (migrate_improves_locality(p, env))
> > + return 1;
>
> Shouldn't this be under the tsk_cache_hot check?
>
> If the task is cache hot, then we would have to update the corresponding schedstat
> metrics.

No; you want migrate_degrades_locality() to be like task_hot(). You want
to _always_ migrate tasks towards better locality irrespective of local
cache hotness.
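
To make that structure concrete, here is a rough sketch (not the actual
patch) of how can_migrate_task() could order the checks: a hypothetical
migrate_degrades_locality(), mirroring migrate_improves_locality() but
testing the source node's preference, is treated the same way as
task_hot(), while an improving move short-circuits both. The only fields
used beyond those in the quoted hunk are the existing sched_domain
counters already consulted by the cache-hot path (nr_balance_failed,
cache_nice_tries); the rest of can_migrate_task() is elided.

    /*
     * Hypothetical mirror of migrate_improves_locality(): true if the move
     * would take the task away from its preferred node.
     */
    static bool migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
    {
    	int src_nid, dst_nid;

    	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
    		return false;

    	src_nid = cpu_to_node(env->src_cpu);
    	dst_nid = cpu_to_node(env->dst_cpu);

    	if (src_nid == dst_nid)
    		return false;

    	return p->numa_preferred_nid == src_nid;
    }

    int can_migrate_task(struct task_struct *p, struct lb_env *env)
    {
    	int tsk_cache_hot;

    	/* ... affinity and currently-running checks elided ... */

    	/* Always pull towards better locality, even if the task is cache hot. */
    	if (migrate_improves_locality(p, env))
    		return 1;

    	tsk_cache_hot = task_hot(p, env->src_rq->clock_task, env->sd);

    	/*
    	 * A locality-degrading move is resisted the same way a cache-hot
    	 * task is: only after repeated balance failures is it forced.
    	 */
    	if (tsk_cache_hot || migrate_degrades_locality(p, env)) {
    		if (env->sd->nr_balance_failed <= env->sd->cache_nice_tries)
    			return 0;
    		/* too many failed attempts: migrate anyway */
    	}

    	return 1;
    }

The point is the ordering: the improves check returns before task_hot()
is even consulted, and the degrades check sits alongside task_hot()
rather than replacing it.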