[PATCH v2] Make sure timers have migrated before killing migration_thread

From: Amit K. Arora
Date: Wed May 19 2010 - 08:14:47 EST


On Wed, May 19, 2010 at 11:31:55AM +0200, Peter Zijlstra wrote:
> On Wed, 2010-05-19 at 14:35 +0530, Amit K. Arora wrote:

Hi Peter,

Thanks for the review!

> > diff -Nuarp linux-2.6.34.org/kernel/sched.c linux-2.6.34/kernel/sched.c
> > --- linux-2.6.34.org/kernel/sched.c 2010-05-18 22:56:21.000000000 -0700
> > +++ linux-2.6.34/kernel/sched.c 2010-05-18 22:58:31.000000000 -0700
> > @@ -5942,14 +5942,26 @@ migration_call(struct notifier_block *nf
> > cpu_rq(cpu)->migration_thread = NULL;
> > break;
> >
> > + case CPU_POST_DEAD:
> > + /*
> > + Bring the migration thread down in CPU_POST_DEAD event,
> > + since the timers should have got migrated by now and thus
> > + we should not see a deadlock between trying to kill the
> > + migration thread and the sched_rt_period_timer.
> > + */
>
> Faulty comment style that, please use:
>
> /*
> * text
> * goes
> * here
> */

Sure.

> > + cpuset_lock();
> > + rq = cpu_rq(cpu);
> > + kthread_stop(rq->migration_thread);
> > + put_task_struct(rq->migration_thread);
> > + rq->migration_thread = NULL;
> > + cpuset_unlock();
> > + break;
> > +
>
> The other problem is more urgent though, CPU_POST_DEAD runs outside of
> the hotplug lock and thus the above becomes a race where we could
> possibly kill off the migration thread of a newly brought up cpu:
>
> cpu0 - down 2
> cpu1 - up 2 (allocs a new migration thread, and leaks the old one)
> cpu0 - post_down 2 - frees the migration thread -- oops!

Ok. So, how about adding a check in the CPU_UP_PREPARE event handling too?
The cpuset_lock will synchronize the two handlers, and thus avoid the race
between killing the migration_thread in the up_prepare and post_dead events.
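
As a sanity check on that reasoning, below is a small userspace sketch of
just that interleaving: a pthread mutex stands in for cpuset_lock(), a heap
string stands in for rq->migration_thread, and the names (up_prepare,
post_dead, reap) are illustrative only, not kernel API. It models only the
ordering the patch comments describe (the cpu is brought back up before the
delayed CPU_POST_DEAD runs), with creation of the new migration thread
elided:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* ~ cpuset_lock() */
static char *worker;                            /* ~ rq->migration_thread */

static void reap(const char *who)
{
	printf("%s: stopping '%s'\n", who, worker);
	free(worker);		/* ~ kthread_stop() + put_task_struct() */
	worker = NULL;
}

/* CPU_UP_PREPARE: reap a stale thread left behind by an earlier offline
 * (creation of the new migration thread is elided). */
static void up_prepare(void)
{
	pthread_mutex_lock(&lock);
	if (worker)		/* ~ unlikely(rq->migration_thread) */
		reap("UP_PREPARE");
	pthread_mutex_unlock(&lock);
}

/* CPU_POST_DEAD: free the thread only if UP_PREPARE did not get to it first. */
static void post_dead(void)
{
	pthread_mutex_lock(&lock);
	if (worker)		/* ~ likely(rq->migration_thread) */
		reap("POST_DEAD");
	else
		printf("POST_DEAD: already reaped by UP_PREPARE\n");
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	worker = strdup("migration/2");	/* orphaned by the cpu-down */
	up_prepare();	/* cpu 2 is onlined again before POST_DEAD runs */
	post_dead();	/* the delayed POST_DEAD finds nothing to reap */
	return 0;
}

Both handlers test and clear the pointer under the same lock, so exactly one
of them releases the thread; that is the invariant the unlikely()/likely()
checks in the patch are meant to preserve.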

Here is the updated patch. If you don't like this one either, do you mind
suggesting an alternate approach to tackle the problem? Thanks!

--
Regards,
Amit Arora


Signed-off-by: Amit Arora <aarora@xxxxxxxxxx>
Signed-off-by: Gautham R Shenoy <ego@xxxxxxxxxx>
--
diff -Nuarp linux-2.6.34.org/kernel/sched.c linux-2.6.34/kernel/sched.c
--- linux-2.6.34.org/kernel/sched.c	2010-05-18 22:56:21.000000000 -0700
+++ linux-2.6.34/kernel/sched.c	2010-05-19 04:47:49.000000000 -0700
@@ -5900,6 +5900,19 @@ migration_call(struct notifier_block *nf
 
 	case CPU_UP_PREPARE:
 	case CPU_UP_PREPARE_FROZEN:
+		cpuset_lock();
+		rq = cpu_rq(cpu);
+		/*
+		 * Since we now kill migration_thread in the CPU_POST_DEAD
+		 * event, there may be a race here. So, let's clean up the
+		 * old migration_thread on the rq, if any.
+		 */
+		if (unlikely(rq->migration_thread)) {
+			kthread_stop(rq->migration_thread);
+			put_task_struct(rq->migration_thread);
+			rq->migration_thread = NULL;
+		}
+		cpuset_unlock();
 		p = kthread_create(migration_thread, hcpu, "migration/%d", cpu);
 		if (IS_ERR(p))
 			return NOTIFY_BAD;
@@ -5942,14 +5955,34 @@ migration_call(struct notifier_block *nf
 		cpu_rq(cpu)->migration_thread = NULL;
 		break;
 
+	case CPU_POST_DEAD:
+		/*
+		 * Bring the migration thread down in the CPU_POST_DEAD event,
+		 * since the timers should have been migrated by now and thus
+		 * we should not see a deadlock between trying to kill the
+		 * migration thread and the sched_rt_period_timer.
+		 */
+		cpuset_lock();
+		rq = cpu_rq(cpu);
+		if (likely(rq->migration_thread)) {
+			/*
+			 * It's possible that this CPU was onlined (from a
+			 * different CPU) before we reached here and the
+			 * migration_thread was cleaned up in the
+			 * CPU_UP_PREPARE event handling.
+			 */
+			kthread_stop(rq->migration_thread);
+			put_task_struct(rq->migration_thread);
+			rq->migration_thread = NULL;
+		}
+		cpuset_unlock();
+		break;
+
 	case CPU_DEAD:
 	case CPU_DEAD_FROZEN:
 		cpuset_lock(); /* around calls to cpuset_cpus_allowed_lock() */
 		migrate_live_tasks(cpu);
 		rq = cpu_rq(cpu);
-		kthread_stop(rq->migration_thread);
-		put_task_struct(rq->migration_thread);
-		rq->migration_thread = NULL;
 		/* Idle task back to normal (off runqueue, low prio) */
 		raw_spin_lock_irq(&rq->lock);
 		update_rq_clock(rq);
--