Re: [RFC][PATCH 1/4] sched: Fix a race between __kthread_bind() and sched_setaffinity()

From: Tejun Heo
Date: Fri Aug 07 2015 - 11:16:18 EST


On Fri, Aug 07, 2015 at 04:27:08PM +0200, Peter Zijlstra wrote:
> Which is the rescue thread attaching itself to a pool that needs help,
> and obviously the rescue thread isn't new so kthread_bind doesn't work
> right.
>
> The best I could come up with is something like the below on top; does
> that work for you? I'll go give it some runtime.
>
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1622,11 +1622,15 @@ static struct worker *alloc_worker(int n
>   * cpu-[un]hotplugs.
>   */
>  static void worker_attach_to_pool(struct worker *worker,
> -				  struct worker_pool *pool)
> +				  struct worker_pool *pool,
> +				  bool new)
>  {
>  	mutex_lock(&pool->attach_mutex);
>
> -	kthread_bind_mask(worker->task, pool->attrs->cpumask);
> +	if (new)
> +		kthread_bind_mask(worker->task, pool->attrs->cpumask);
> +	else
> +		set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
>
>  	/*
>  	 * The pool->attach_mutex ensures %POOL_DISASSOCIATED remains
> @@ -1712,7 +1716,7 @@ static struct worker *create_worker(stru
>  	set_user_nice(worker->task, pool->attrs->nice);
>
>  	/* successful, attach the worker to the pool */
> -	worker_attach_to_pool(worker, pool);
> +	worker_attach_to_pool(worker, pool, true);
>
>  	/* start the newly created worker */
>  	spin_lock_irq(&pool->lock);
> @@ -2241,7 +2245,7 @@ static int rescuer_thread(void *__rescue
>
>  	spin_unlock_irq(&wq_mayday_lock);
>
> -	worker_attach_to_pool(rescuer, pool);
> +	worker_attach_to_pool(rescuer, pool, false);

Hmmm... the race condition didn't exist for workqueue in the first
place, right? As long as PF_NO_SETAFFINITY is set before the affinity
is configured, there's no race window. I think the code was better
before. Can't we just revert the workqueue.c part?
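
For reference, the pre-series ordering went roughly like this (a
sketch, not the verbatim source; the point is just the ordering of the
flag write relative to the affinity write):

	/* create_worker(): flag is set before any affinity is applied */
	worker->task = kthread_create_on_node(worker_thread, worker,
					      pool->node, "kworker/%s",
					      id_buf);
	...
	/* prevent userland from meddling with the worker's cpumask */
	worker->task->flags |= PF_NO_SETAFFINITY;

	/* worker_attach_to_pool(), later, under pool->attach_mutex */
	set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);

	/* meanwhile, sched_setaffinity() always observes the flag */
	if (p->flags & PF_NO_SETAFFINITY)
		return -EINVAL;

Because the flag is visible before the first affinity write, a
concurrent sched_setaffinity() from userspace can never slip in
between, for new workers and the rescuer alike.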

Thanks.

--
tejun