Re: [PATCH RFC] hotplug-memory: refactor online_pages to separate zone growth from page onlining

From: Dave Hansen
Date: Wed Apr 02 2008 - 19:28:19 EST


On Wed, 2008-04-02 at 15:13 -0700, Jeremy Fitzhardinge wrote:
> Dave Hansen wrote:
> > Yeah, but I'm just talking about hotplugged memory. When we add it, we
> > don't have to map the added pages (since they're highmem) and don't have
> > to touch their contents and zero them out, either. Then, the balloon
> > driver can notice that the memory is too large, and start to balloon it
> > down.
>
> I didn't think x86-64 had a notion of highmem.

It doesn't.

> How do you prevent the pages from being used before they're ballooned out?

I think there are a few options here. One is to check on the way out of
the allocator that we're not over some Xen-specific limit. Basically
that we aren't about to touch a hardware page for which the hypervisor
hasn't allocated backing memory.
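
Something like this, maybe (rough sketch only; xen_backed_pfn_limit is
a made-up variable standing in for whatever Xen-specific limit we'd
track):

	/* Hypothetical: highest pfn the hypervisor has backed so far */
	extern unsigned long xen_backed_pfn_limit;

	static inline bool xen_page_backed(struct page *page)
	{
		return page_to_pfn(page) < xen_backed_pfn_limit;
	}

	/* on the way out of the allocator: */
	if (!xen_page_backed(page)) {
		/* over the Xen limit -- don't hand this page out */
	}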

Another is to give pages sitting in the allocator some kind of
associated state or keep them on separate lists. (I think this has
something in common with those s390 CMM patches). When you want to
allocate a page, you not only pull it off the buddy lists, but you also
have to check with the hypervisor to make sure it has backing store
before you actually return it. You make it non-volatile in CMM-speak (I
think).

If you can't allocate backing store for a page, you toss it over to the
balloon driver (whose whole job is to keep track of pages without
hypervisor backing anyway) and go back to the allocator for another
one.
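
In rough code, the whole dance might look like this
(xen_populate_backing() and balloon_append() are made-up names here,
not real interfaces, just the shape of the thing):

	struct page *alloc_backed_page(gfp_t gfp)
	{
		struct page *page;

		while ((page = alloc_page(gfp))) {
			/* make it "non-volatile", in CMM terms */
			if (xen_populate_backing(page_to_pfn(page)) == 0)
				return page;	/* backed, safe to use */

			/*
			 * No backing store: park the page in the
			 * balloon driver and try the allocator again.
			 */
			balloon_append(page);
		}
		return NULL;
	}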

> >> Everything also applies to x86-64.
> >
> > Not really, though. We don't have the page->flags shortage or lack of
> > vmemmap on x86_64.
>
> Right now, I'd rather have a single mechanism that works for both.

Yeah, that would be most ideal. But, at the same time, you don't want
to hobble your rockstar x86_64 implementation with quirks inherited from
the crufty 32-bit junk. :)

-- Dave
