[PATCH 4/5] workqueue: update NUMA affinity for the node that lost a CPU

From: Lai Jiangshan
Date: Fri Dec 12 2014 - 05:16:32 EST


We fixed the major cases where the NUMA mapping changes.

We still rely on the assumption that when the node<->cpu mapping changes,
the original node is offline; the current memory-hotplug code also
guarantees this.

This assumption may not hold in the future: orig_node could still be online
in some cases. In those cases, the cpumask of orig_node's pwqs still contains
the onlining CPU, which now belongs to another node, so a worker may run on
the onlining CPU (i.e. on the wrong node).

Drop this assumption and call wq_update_unbound_numa() to update the
affinity in this case.
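
For illustration, below is a minimal standalone model of the problem (the
types, masks and helper are hypothetical stand-ins, not the kernel's
structures): when a CPU moves from orig_node to new_node while orig_node
stays online, refreshing only new_node's mask leaves the migrated CPU in
orig_node's mask, which is the situation this patch handles.

	#include <stdio.h>

	#define NR_CPUS  4
	#define NR_NODES 2

	/* cpu_to_node[cpu] models the node<->cpu mapping */
	static int cpu_to_node[NR_CPUS] = { 0, 0, 0, 1 };

	/* per-node allowed-CPU bitmask, models a node's pwq cpumask */
	static unsigned int node_mask[NR_NODES];

	/* recompute one node's mask from the current mapping */
	static void update_node_affinity(int node)
	{
		unsigned int mask = 0;
		int cpu;

		for (cpu = 0; cpu < NR_CPUS; cpu++)
			if (cpu_to_node[cpu] == node)
				mask |= 1u << cpu;
		node_mask[node] = mask;
	}

	int main(void)
	{
		int node, orig_node;

		for (node = 0; node < NR_NODES; node++)
			update_node_affinity(node);
		printf("before: node0=0x%x node1=0x%x\n", node_mask[0], node_mask[1]);

		/* CPU 2 comes back online as a CPU of node 1; node 0 stays online */
		orig_node = cpu_to_node[2];
		cpu_to_node[2] = 1;
		update_node_affinity(1);		/* the case already handled */
		update_node_affinity(orig_node);	/* the case this patch adds */

		printf("after:  node0=0x%x node1=0x%x\n", node_mask[0], node_mask[1]);
		return 0;
	}

Without the second update, node 0's mask would keep CPU 2 (0x7 instead of
0x3), which is exactly how a worker could end up running on the wrong node.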

Cc: Tejun Heo <tj@xxxxxxxxxx>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@xxxxxxxxxxxxxx>
Cc: "Gu, Zheng" <guz.fnst@xxxxxxxxxxxxxx>
Cc: tangchen <tangchen@xxxxxxxxxxxxxx>
Cc: Hiroyuki KAMEZAWA <kamezawa.hiroyu@xxxxxxxxxxxxxx>
Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxx>
---
kernel/workqueue.c | 15 +++++++++++++++
1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7fbabf6..29a96c3 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4007,6 +4007,21 @@ static void wq_update_numa_mapping(int cpu)
 		if (pool->node != node)
 			pool->node = node;
 	}
+
+	/* Test whether we hit the case where orig_node is still online */
+	if (orig_node != NUMA_NO_NODE &&
+	    !cpumask_empty(cpumask_of_node(orig_node))) {
+		struct workqueue_struct *wq;
+		cpu = cpumask_any(cpumask_of_node(orig_node));
+
+		/*
+		 * The pwqs of orig_node still allow the onlining CPU, which
+		 * now belongs to new_node.  Update the NUMA affinity for
+		 * orig_node.
+		 */
+		list_for_each_entry(wq, &workqueues, list)
+			wq_update_unbound_numa(wq, cpu, true);
+	}
 }
 
 static int alloc_and_link_pwqs(struct workqueue_struct *wq)
--
1.7.4.4
