Re: [PATCH wq/for-4.5-fixes] workqueue: handle NUMA_NO_NODE for unbound pool_workqueue lookup

From: Tejun Heo
Date: Wed Feb 03 2016 - 14:28:18 EST


Hello,

On Wed, Feb 03, 2016 at 08:12:19PM +0100, Thomas Gleixner wrote:
> > Signed-off-by: Tejun Heo <tj@xxxxxxxxxx>
> > Reported-by: Mike Galbraith <umgwanakikbuti@xxxxxxxxx>
> > Cc: Tang Chen <tangchen@xxxxxxxxxxxxxx>
> > Cc: Rafael J. Wysocki <rafael@xxxxxxxxxx>
> > Cc: Len Brown <len.brown@xxxxxxxxx>
> > Cc: stable@xxxxxxxxxxxxxxx # v4.3+
>
> 4.3+ ? Hasn't 874bbfe600a6 been backported to older stable kernels?
>
> Adding a 'Fixes: 874bbfe600a6 ...' tag is what you really want here.

Oops, you're right. Will add that once Mike confirms the fix.
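
IOW, something like the following (assuming I've got the subject line
of 874bbfe600a6 right - please double-check against the actual commit):

  Fixes: 874bbfe600a6 ("workqueue: make sure delayed work run in local cpu")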

> > @@ -570,6 +570,16 @@ static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
> >  						  int node)
> >  {
> >  	assert_rcu_or_wq_mutex_or_pool_mutex(wq);
> > +
> > +	/*
> > +	 * XXX: @node can be NUMA_NO_NODE if CPU goes offline while a
> > +	 * delayed item is pending.  The plan is to keep CPU -> NODE
> > +	 * mapping valid and stable across CPU on/offlines.  Once that
> > +	 * happens, this workaround can be removed.
>
> So what happens if the whole node is offline?

pool_workqueue lookup itself should be fine as dfl_pwq is assigned to
all nodes by default.  When the node comes back online, things can
currently break because the cpu -> node mapping may change.  That's
what Tang has been working on.  It's a bigger problem throughout the
memory allocation path though, because there's no synchronization
around the cpu -> node mapping.  Hopefully, the pending patchset can
get through sooner rather than later.
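
For reference, here's roughly what the lookup ends up looking like
with the workaround applied (the quoted hunk above is trimmed; the
fallback below is a sketch of the fix, not a verbatim copy of the
patch):

static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
						  int node)
{
	assert_rcu_or_wq_mutex_or_pool_mutex(wq);

	/* workaround: fall back to the default pwq, which is always
	 * valid, when a CPU going offline left us with NUMA_NO_NODE */
	if (unlikely(node == NUMA_NO_NODE))
		return wq->dfl_pwq;

	/* normal case: per-node table, populated for every node */
	return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
}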

Thanks.

--
tejun