Re: [PATCH v3 1/2] xen/balloon: set a mapping for ballooned out pages
From: Stefano Stabellini
Date: Wed Jul 24 2013 - 07:05:32 EST
On Tue, 23 Jul 2013, Konrad Rzeszutek Wilk wrote:
> On Tue, Jul 23, 2013 at 07:00:09PM +0100, Ian Campbell wrote:
> > On Tue, 2013-07-23 at 18:27 +0100, Stefano Stabellini wrote:
> > > +static int __cpuinit balloon_cpu_notify(struct notifier_block *self,
> > > +					unsigned long action, void *hcpu)
> > > +{
> > > +	int cpu = (long)hcpu;
> > > +	switch (action) {
> > > +	case CPU_UP_PREPARE:
> > > +		if (per_cpu(balloon_scratch_page, cpu) != NULL)
> > > +			break;
> >
> > Thinking about this a bit more -- do we know what happens to the per-cpu
> > area for a CPU which is unplugged and then reintroduced? Is it preserved
> > or is it reset?
> >
> > If it is reset then this gets more complicated :-( We might be able to
> > use the core mm page reference count, so that when the last reference is
> > removed the page is automatically reclaimed. We can obviously take a
> > reference whenever we add a mapping of the trade page, but I'm not sure
> > we are always on the path which removes such mappings... Even then you
> > could waste pages for some potentially large amount of time each time
> > you replug a VCPU.
> >
> > Urg, I really hope the per-cpu area is preserved!
>
> It is. During bootup time you see this:
>
> [ 0.000000] smpboot: Allowing 128 CPUs, 96 hotplug CPUs
> [ 0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
>
> which means that all of the per-CPU areas are shrunk down to 128 (from the
> CONFIG_NR_CPUS=512 the kernel was built with) and stay for the lifetime of
> the kernel.
>
> You might have to clear it when the vCPU comes back up though - otherwise you
> will have garbage.
I don't see anything in the hotplug code that would modify the value of
the per_cpu area of offline cpus.
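
For reference, here is a minimal sketch of the pattern under discussion,
assuming (as the quoted hunk suggests) that the scratch page is allocated in
CPU_UP_PREPARE; the error handling and message text below are illustrative,
not taken from the patch. Because the per-cpu area is preserved across
offline/online, the NULL check means the page is allocated only on the first
bring-up and simply reused when the vCPU is replugged:

static DEFINE_PER_CPU(struct page *, balloon_scratch_page);

static int __cpuinit balloon_cpu_notify(struct notifier_block *self,
					unsigned long action, void *hcpu)
{
	int cpu = (long)hcpu;

	switch (action) {
	case CPU_UP_PREPARE:
		/* Per-cpu pointer survived a previous offline: reuse it. */
		if (per_cpu(balloon_scratch_page, cpu) != NULL)
			break;
		per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
		if (per_cpu(balloon_scratch_page, cpu) == NULL) {
			pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n",
				cpu);
			return NOTIFY_BAD;
		}
		break;
	default:
		break;
	}
	return NOTIFY_OK;
}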