Re: [PATCH 4/5] workqueue: update NUMA affinity for the node lost CPU

From: Tejun Heo
Date: Fri Dec 12 2014 - 12:27:47 EST


On Fri, Dec 12, 2014 at 06:19:54PM +0800, Lai Jiangshan wrote:
> We fixed the major cases where the NUMA mapping changes.
>
> We still assume that when the node<->cpu mapping changes, the original
> node is offline; the current memory-hotplug code also guarantees this.
>
> This assumption may not hold in the future, and in some cases the
> orig_node could still be online. In those cases the cpumask of the
> pwqs of the orig_node still contains the onlining CPU, which now
> belongs to another node, so a worker may run on the onlining CPU
> (i.e. on the wrong node).
>
> So drop this assumption and make the code call wq_update_unbound_numa()
> to update the affinity in this case.
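(For context: the existing CPU-online path in kernel/workqueue.c already
walks the workqueue list and calls wq_update_unbound_numa() for the
onlining CPU. The sketch below is a minimal, abbreviated illustration of
that path as it looked in mainline around this time, not the patch under
discussion; the rebinding of per-CPU pools is omitted.)

    /*
     * Abbreviated sketch of the CPU-online hotplug callback in
     * kernel/workqueue.c (circa 3.18): on CPU_ONLINE it updates the
     * NUMA affinity of every unbound workqueue for the onlining CPU.
     */
    static int workqueue_cpu_up_callback(struct notifier_block *nfb,
                                         unsigned long action, void *hcpu)
    {
            int cpu = (unsigned long)hcpu;
            struct workqueue_struct *wq;

            switch (action & ~CPU_TASKS_FROZEN) {
            case CPU_ONLINE:
                    mutex_lock(&wq_pool_mutex);

                    /* update NUMA affinity of unbound workqueues */
                    list_for_each_entry(wq, &workqueues, list)
                            wq_update_unbound_numa(wq, cpu, true);

                    mutex_unlock(&wq_pool_mutex);
                    break;
            }
            return NOTIFY_OK;
    }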

This seriously obfuscates things. I really don't think meddling with
the existing pools is a good idea. The foundation those pools were
standing on is gone. Drain and discard the pools; please don't try to
retrofit them onto a new foundation.

Thanks.

--
tejun