Re: [PATCH 3/7] numa,sched: build per numa_group active node mask from faults_from statistics

From: Peter Zijlstra
Date: Mon Jan 20 2014 - 11:31:32 EST


On Fri, Jan 17, 2014 at 04:12:05PM -0500, riel@xxxxxxxxxx wrote:
> /*
> + * Iterate over the nodes from which NUMA hinting faults were triggered; in
> + * other words, the nodes where the CPUs that incurred NUMA hinting faults
> + * are located. The bitmask is used to limit NUMA page migrations and to
> + * spread memory out among the actively used nodes. To prevent flip-flopping
> + * and excessive page migrations, nodes are added when they cause more than
> + * 40% of the maximum number of faults, but are only removed when they drop
> + * below 20%.
> + */
> +static void update_numa_active_node_mask(struct task_struct *p)
> +{
> +	unsigned long faults, max_faults = 0;
> +	struct numa_group *numa_group = p->numa_group;
> +	int nid;
> +
> +	for_each_online_node(nid) {
> +		faults = numa_group->faults_from[task_faults_idx(nid, 0)] +
> +			 numa_group->faults_from[task_faults_idx(nid, 1)];
> +		if (faults > max_faults)
> +			max_faults = faults;
> +	}
> +
> +	for_each_online_node(nid) {
> +		faults = numa_group->faults_from[task_faults_idx(nid, 0)] +
> +			 numa_group->faults_from[task_faults_idx(nid, 1)];
> +		if (!node_isset(nid, numa_group->active_nodes)) {
> +			if (faults > max_faults * 4 / 10)
> +				node_set(nid, numa_group->active_nodes);
> +		} else if (faults < max_faults * 2 / 10)
> +			node_clear(nid, numa_group->active_nodes);
> +	}
> +}

Why not use 6/16 and 3/16 resp.? That avoids an actual division.
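FWIW, an untested sketch of what that second loop body would look like with the
power-of-two denominators (6/16 = 37.5% and 3/16 = 18.75%, close enough to the
40%/20% in the patch, and the divide by 16 reduces to a shift):

		/* hysteresis thresholds as n/16 so the compiler can use a shift */
		if (!node_isset(nid, numa_group->active_nodes)) {
			if (faults > max_faults * 6 / 16)
				node_set(nid, numa_group->active_nodes);
		} else if (faults < max_faults * 3 / 16)
			node_clear(nid, numa_group->active_nodes);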