Re: smp: Start up non-boot CPUs asynchronously

From: Linus Torvalds
Date: Wed Feb 01 2012 - 18:31:35 EST


On Wed, Feb 1, 2012 at 3:09 PM, Arjan van de Ven
<arjanvandeven@xxxxxxxxx> wrote:
>
>
> we spend slightly more than 10 milliseconds on doing the hardware level
> "send ipi, wait for the cpu to get power"  dance. This is mostly just
> hardware physics.
> we spend a bunch of time calibrating loops_per_jiffy/tsc (in 3.3-rc this is
> only done once per socket, but each time we do it, it takes several dozen
> milliseconds)
> we spend 20 milliseconds on making sure the tsc is not out of sync with the
> rest of the system (we're looking at optimizing this right now)
>
> a 3.2 kernel spent on average 120 milliseconds per logical non-boot cpu on
> my laptop. 3.3-rc is better (the calibration is now cached for each physical
> cpu), but still dire

Could we drop the cpu hotplug lock earlier?

In particular, maybe we could split it up, and make it something like
the following:

- keep the existing cpu_hotplug.lock with largely the same semantics

- add a new *per-cpu* hotplug lock that gets taken fairly early when
the CPU comes up (before calibration), and then we can drop the global
lock. We just need to make sure that the CPU has been added to the
list of CPUs; we don't need the CPU to have fully initialized
itself.

- on cpu unplug, we first take the global lock, and then after that
we need to take the local lock of the CPU being brought down - so that
a "down" event cannot succeed before the previous "up" is complete.

Wouldn't something like that largely solve the problem? Sure, maybe
some of the current get_online_cpus() users would need to be taught to
wait for the percpu lock (or completion - maybe that would be better),
but most of them don't really care. They tend to just want to do
something fairly simple with a stable list of CPUs.

I dunno. Maybe it would be more painful than I think it would.

Linus