Re: Hugetlbpages in very large memory machines.......

From: Hirokazu Takahashi
Date: Fri Mar 12 2004 - 23:54:59 EST


My following patch might help you. It includes a page-fault routine
for hugetlb pages. If you want to use it for your purpose, you need to
remove some code from hugetlb_prefault() so that hugetlb_fault() handles
the pages instead.

But it's just for IA32.
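
As a rough illustration of why fault-time allocation helps (this is not
part of the patch, and it uses ordinary anonymous 4K pages rather than
hugetlb pages; the region size and thread count are made-up figures),
the little program below has each thread first-touch its own slice of a
mapping, so every CPU ends up zeroing pages in parallel instead of one
thread doing all the work up front the way hugetlb_prefault() does:

/* Illustrative only: parallel first-touch of an anonymous mapping. */
#define _DEFAULT_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define REGION_SIZE   (1UL << 30)   /* 1 GB, stand-in for "lots of memory" */
#define NTHREADS      8             /* pretend these are 8 CPUs            */

struct slice {
	char   *base;
	size_t  len;
};

/* Each worker faults in (and therefore zeroes) only its own slice. */
static void *touch_slice(void *arg)
{
	struct slice *s = arg;
	memset(s->base, 0, s->len);      /* first touch => page faults here */
	return NULL;
}

int main(void)
{
	char *region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (region == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	pthread_t tid[NTHREADS];
	struct slice s[NTHREADS];
	size_t chunk = REGION_SIZE / NTHREADS;

	for (int i = 0; i < NTHREADS; i++) {
		s[i].base = region + (size_t)i * chunk;
		s[i].len  = chunk;
		pthread_create(&tid[i], NULL, touch_slice, &s[i]);
	}
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	printf("faulted in %lu MB across %d threads\n",
	       REGION_SIZE >> 20, NTHREADS);
	munmap(region, REGION_SIZE);
	return 0;
}

Build it with something like "cc -O2 -pthread"; the point is only that
the faults (and the zeroing they imply) are spread across all the
worker threads instead of being serialized in one place.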

I heard that n-yoshida@xxxxxxxxxxxxxxx was porting this patch
to IA64.

> We've run into a scaling problem using hugetlbpages in very large memory machines, e.g. machines
> with 1TB or more of main memory. The problem is that hugetlbpage pages are not faulted in; rather,
> they are zeroed and mapped in by hugetlb_prefault() (at least on ia64), which is called in
> response to the user's mmap() request. The net result is that all of the hugetlb pages end up being
> allocated and zeroed by a single thread, and if most of the machine's memory is allocated to hugetlb
> pages, and there is 1 TB or more of main memory, zeroing and allocating all of those pages can take
> a long time (500 s or more).
> We've looked at allocating and zeroing hugetlbpages at fault time, which would at least allow
> multiple processors to be thrown at the problem. Question is, has anyone else been working on
> this problem and might they have prototype code they could share with us?
> Thanks,
> --
> Best Regards,
> Ray
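
For a rough sense of the numbers quoted above (the per-CPU bandwidth is
only an assumed, illustrative figure): if one CPU can zero memory at
about 2 GB/s, then 1 TB takes on the order of 1024 GB / 2 GB/s = 512 s,
which is in line with the 500 s reported. If the zeroing instead happens
at fault time and, say, 64 CPUs each touch their own pages, the same work
drops to roughly 512 s / 64 = 8 s, ignoring memory-bandwidth and NUMA
effects.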

Thank you,
Hirokazu Takahashi.