Hugetlbpages in very large memory machines

From: Ray Bryant
Date: Fri Mar 12 2004 - 22:42:41 EST


We've run into a scaling problem using hugetlbpages in very large memory machines, e.g. machines with 1 TB or more of main memory. The problem is that hugetlbpage pages are not faulted in; rather, they are zeroed and mapped in by hugetlb_prefault() (at least on ia64), which is called in response to the user's mmap() request. The net result is that all of the hugetlb pages end up being allocated and zeroed by a single thread, and if most of the machine's memory is allocated to hugetlb pages and there is 1 TB or more of main memory, zeroing and allocating all of those pages can take a long time (500 seconds or more).
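
To make the symptom concrete, here is a minimal userspace timing probe (our illustration, not code from the kernel): on a kernel that prefaults, essentially all of the allocate-and-zero cost lands inside the mmap() call itself, in the calling thread. The hugetlbfs mount point /mnt/huge and the 1 GB mapping size are assumptions for the example.

/* Probe where the hugetlb setup cost is paid.  Assumes a hugetlbfs
 * mount at /mnt/huge (an assumption for this sketch). */
#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/time.h>

#define MAP_SIZE (1UL << 30)    /* 1 GB; scale up to see the effect */

static double now(void)
{
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
        int fd = open("/mnt/huge/probe", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("open"); return 1; }

        double t0 = now();
        char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        double t1 = now();
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* On a prefaulting kernel, the zeroing was already done above
         * by this single thread, so the touch loop is nearly free. */
        for (unsigned long off = 0; off < MAP_SIZE; off += 4096)
                p[off] = 1;
        double t2 = now();

        printf("mmap: %.3fs  first touch: %.3fs\n", t1 - t0, t2 - t1);
        return 0;
}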

We've looked at allocating and zeroing hugetlbpages at fault time, which would at least allow multiple processors to be thrown at the problem (see the sketch below). The question is: has anyone else been working on this problem, and might they have prototype code they could share with us?
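
As a sketch of why fault-time allocation helps (ours, not prototype code): if each huge page were allocated and zeroed on first touch, initialization could be spread across CPUs simply by touching disjoint slices of the mapping from several threads. NTHREADS, the 16 MB huge page size, and the slicing scheme are assumptions for illustration.

/* Parallel first-touch over a hugetlb mapping.  Only pays off if the
 * kernel allocates/zeroes at fault time rather than in mmap(). */
#include <pthread.h>

#define NTHREADS  8
#define HPAGE_SZ  (1UL << 24)   /* assumed huge page size (16 MB) */

struct slice { char *base; unsigned long len; };

static void *touch(void *arg)
{
        struct slice *s = arg;
        /* Each write faults in (allocates + zeroes) one huge page,
         * so the work parallelizes across the touching threads. */
        for (unsigned long off = 0; off < s->len; off += HPAGE_SZ)
                s->base[off] = 1;
        return NULL;
}

void parallel_first_touch(char *map, unsigned long size)
{
        pthread_t tid[NTHREADS];
        struct slice sl[NTHREADS];
        unsigned long chunk = size / NTHREADS;

        for (int i = 0; i < NTHREADS; i++) {
                sl[i].base = map + i * chunk;
                sl[i].len  = chunk;
                pthread_create(&tid[i], NULL, touch, &sl[i]);
        }
        for (int i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);
}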

Thanks,
--
Best Regards,
Ray
-----------------------------------------------
Ray Bryant
512-453-9679 (work) 512-507-7807 (cell)
raybry@xxxxxxx raybry@xxxxxxxxxxxxx
The box said: "Requires Windows 98 or better",
so I installed Linux.
-----------------------------------------------
