Re: [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19

From: Ingo Molnar
Date: Fri Nov 04 2005 - 02:37:54 EST



* Paul Jackson <pj@xxxxxxx> wrote:

> At first glance, this is the sticky point that jumps out at me.
>
> Andy wrote:
> > My experience is that after some days or weeks of running have gone
> > by, there is no possible way short of a reboot to get pages merged
> > effectively back to any pristine state with the infrastructure that
> > exists there.
>
> I take it, from what Andy writes, and from my other experience with
> similar customers, that his workload is not "well-behaved" in the
> sense you hoped for.
>
> After several diverse jobs are run, we cannot, so far as I know, merge
> small pages back to big pages.

ok, so it has to be the zone solution. I.e., the moment it is a
separate, special zone, you can boot with most of the RAM in that zone
and you are all set. It can be used both for hugetlb allocations and
for other PAGE_SIZE allocations, in a highmem fashion. These HPC
setups are rarely kernel-intensive.
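
To make the "highmem fashion" point concrete, here is a minimal
allocator-side sketch; it assumes the special zone would only be handed
to allocations that can tolerate it, the same rule highmem already
follows (the GFP flags are standard kernel ones, the zone behaviour is
an assumption about this proposal):

	#include <linux/gfp.h>

	/* user/pagecache-style allocation: may land in the big special
	 * zone, just as __GFP_HIGHMEM allocations may land in highmem */
	struct page *upage = alloc_pages(GFP_HIGHUSER, 0);

	/* generic kernel allocation: stays in the (small) part of RAM
	 * that is always usable by the kernel */
	struct page *kpage = alloc_pages(GFP_KERNEL, 0);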

Thus the only dynamic sizing decision that has to be made is to
determine the amount of 'generic kernel RAM' needed in the worst case.
To give an example: on a 256 GB box, set aside 8 GB for generic kernel
needs and put 248 GB into the hugemem zone. That leaves us with the
following scenario: apps can use up to 97% of all RAM for hugemem and
up to 100% of all RAM for PAGE_SIZE allocations, while generic kernel
needs are limited to 3% of RAM. Sounds pretty reasonable and
straightforward from a system-management point of view. No runtime
resizing, but it wouldn't be needed unless kernel activity needs more
than 8 GB of RAM.
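
For illustration, such a split could be expressed once at boot time; a
sketch assuming a kernelcore=-style command-line parameter (an
assumption here, this patch set itself does not define such an
interface):

	# boot-time split for a 256 GB box:
	#   generic kernel RAM:   8 GB / 256 GB ~  3%
	#   hugemem zone:       248 GB / 256 GB ~ 97%
	kernelcore=8G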

Ingo