Re: [ACPI] Re: [RFC 5/6]clean cpu state after hotremove CPU

From: Nathan Lynch
Date: Mon Apr 04 2005 - 10:36:13 EST


On Mon, Apr 04, 2005 at 01:42:18PM +0800, Li Shaohua wrote:
> Hi,
> On Mon, 2005-04-04 at 13:28, Nathan Lynch wrote:
> > On Mon, Apr 04, 2005 at 10:07:02AM +0800, Li Shaohua wrote:
> > > Clean up all CPU states including its runqueue and idle thread,
> > > so we can use boot time code without any changes.
> > > Note this makes /sys/devices/system/cpu/cpux/online unworkable.
> >
> > In what sense does it make the online attribute unworkable?
> I removed the idle thread and other per-CPU state, which puts the dead
> CPU into a 'halt' busy loop.
>
> >
> > > diff -puN kernel/exit.c~cpu_state_clean kernel/exit.c
> > > --- linux-2.6.11/kernel/exit.c~cpu_state_clean 2005-03-31 10:50:27.000000000 +0800
> > > +++ linux-2.6.11-root/kernel/exit.c 2005-03-31 10:50:27.000000000 +0800
> > > @@ -845,6 +845,65 @@ fastcall NORET_TYPE void do_exit(long co
> > > for (;;) ;
> > > }
> > >
> > > +#ifdef CONFIG_STR_SMP
> > > +void do_exit_idle(void)
> > > +{
> > > + struct task_struct *tsk = current;
> > > + int group_dead;
> > > +
> > > + BUG_ON(tsk->pid);
> > > + BUG_ON(tsk->mm);
> > > +
> > > + if (tsk->io_context)
> > > + exit_io_context();
> > > + tsk->flags |= PF_EXITING;
> > > + tsk->it_virt_expires = cputime_zero;
> > > + tsk->it_prof_expires = cputime_zero;
> > > + tsk->it_sched_expires = 0;
> > > +
> > > + acct_update_integrals(tsk);
> > > + update_mem_hiwater(tsk);
> > > + group_dead = atomic_dec_and_test(&tsk->signal->live);
> > > + if (group_dead) {
> > > + del_timer_sync(&tsk->signal->real_timer);
> > > + acct_process(-1);
> > > + }
> > > + exit_mm(tsk);
> > > +
> > > + exit_sem(tsk);
> > > + __exit_files(tsk);
> > > + __exit_fs(tsk);
> > > + exit_namespace(tsk);
> > > + exit_thread();
> > > + exit_keys(tsk);
> > > +
> > > + if (group_dead && tsk->signal->leader)
> > > + disassociate_ctty(1);
> > > +
> > > + module_put(tsk->thread_info->exec_domain->module);
> > > + if (tsk->binfmt)
> > > + module_put(tsk->binfmt->module);
> > > +
> > > + tsk->exit_code = -1;
> > > + tsk->exit_state = EXIT_DEAD;
> > > +
> > > + /* in release_task */
> > > + atomic_dec(&tsk->user->processes);
> > > + write_lock_irq(&tasklist_lock);
> > > + __exit_signal(tsk);
> > > + __exit_sighand(tsk);
> > > + write_unlock_irq(&tasklist_lock);
> > > + release_thread(tsk);
> > > + put_task_struct(tsk);
> > > +
> > > + tsk->flags |= PF_DEAD;
> > > +#ifdef CONFIG_NUMA
> > > + mpol_free(tsk->mempolicy);
> > > + tsk->mempolicy = NULL;
> > > +#endif
> > > +}
> > > +#endif
> >
> > I don't understand why this is needed at all. It looks like a fair
> > amount of code from do_exit is being duplicated here.
> Yes, exactly. Someone who understands do_exit, please help clean up
> the code. I'd like to remove the idle thread, since the smpboot code
> will create a new one.

I'd say fix the smpboot code so that it doesn't create new idle tasks
except during boot.
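Something along these lines, cached per CPU, would let hot-add reuse the
existing idle task instead of destroying and recreating it (a rough sketch
only; fork_idle() stands in for whatever the arch's idle-creation path is,
and the array/helper names are illustrative, not code from the 2.6.11 tree):

static struct task_struct *cpu_idle_thread[NR_CPUS];

static struct task_struct *get_idle_for_cpu(int cpu)
{
	struct task_struct *idle = cpu_idle_thread[cpu];

	if (!idle) {
		/* first bring-up of this cpu only */
		idle = fork_idle(cpu);
		if (!IS_ERR(idle))
			cpu_idle_thread[cpu] = idle;
	}
	return idle;
}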

>
> > We've been
> > doing cpu removal on ppc64 logical partitions for a while and never
> > needed to do anything like this.
> Did it remove the idle thread, or does the dead CPU sit in a busy idle
> loop?

Neither. The cpu is definitely offline, but there is no reason to
free the idle thread.
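For context, the offlined CPU's idle thread simply parks the CPU in a
low-power loop until it is brought back online. Roughly (a sketch of the
idea only, not the actual ppc64 cede loop or i386 play_dead code; names
are illustrative):

static void cpu_play_dead(void)
{
	local_irq_disable();
	/* spin (or hlt/cede) until this cpu is onlined again */
	while (!cpu_online(smp_processor_id()))
		cpu_relax();
	local_irq_enable();
	/* fall back into the normal idle loop; the idle task never exited */
}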

>
> > Maybe idle_task_exit would suffice?
> idle_task_exit seems to just drop the mm. We need to destroy the idle
> task for physical CPU hotplug, right?

No.
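Dropping the borrowed mm is the only per-cpu cleanup that is actually
needed; the idle task itself can stay. Roughly what idle_task_exit() does
(paraphrased from kernel/sched.c, details approximate):

void idle_task_exit(void)
{
	struct mm_struct *mm = current->active_mm;

	BUG_ON(cpu_online(smp_processor_id()));

	if (mm != &init_mm)
		switch_mm(mm, &init_mm, current);
	mmdrop(mm);
}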

> >
> > I don't understand the need for this, either. The existing cpu
> > hotplug notifier in the scheduler takes care of initializing the sched
> > domains and groups appropriately for online/offline events; why do you
> > need to touch the runqueue structures?
> If a CPU is physically hot-removed from the system, shouldn't we clean
> up its runqueue?

No. It should make zero difference to the scheduler whether the "play
dead" cpu hotplug or "physical" hotplug is being used.


Nathan