Require more info on Huge pages

From: Halesh
Date: Tue Mar 17 2009 - 07:45:23 EST



Hi everyone,

While going through huge pages in Linux, I ran the test available at
http://lxr.linux.no/linux/Documentation/vm/hugetlbpage.txt. I configured
nr_hugepages and the hugetlbfs mount point properly.

If I interrupt the test while it is running (ctrl+c) and then try to restart
it, mmap fails with "No memory to map". In other words, I am no longer able
to mmap huge pages.

This happens only when I interrupt the running test and try to rerun it.
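
For reference, the core of what I am running is essentially the example from
hugetlbpage.txt; a minimal sketch below (the /mnt/huge path and the 256 MB
length reflect my setup, so treat them as assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define FILE_NAME "/mnt/huge/hugepagefile"  /* my hugetlbfs mount */
#define LENGTH (256UL * 1024 * 1024)        /* 128 huge pages of 2 MB */

int main(void)
{
        int fd = open(FILE_NAME, O_CREAT | O_RDWR, 0755);
        if (fd < 0) {
                perror("open");
                exit(1);
        }

        /* Mapping the file reserves huge pages (counted in
         * HugePages_Rsvd). This is the call that fails on a rerun. */
        char *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (addr == MAP_FAILED) {
                perror("mmap");
                unlink(FILE_NAME);
                exit(1);
        }

        /* Touch every byte; a ctrl+c here kills the process before
         * the cleanup below ever runs. */
        for (unsigned long i = 0; i < LENGTH; i++)
                addr[i] = (char)i;

        munmap(addr, LENGTH);
        close(fd);
        unlink(FILE_NAME);  /* without this the file stays behind in
                             * the hugetlbfs mount */
        return 0;
}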

I checked the /proc/meminfo output:

MemTotal: 3919196 kB
MemFree: 373460 kB
<snip>
HugePages_Total: 1521
HugePages_Free: 1309
HugePages_Rsvd: 1308 **
Hugepagesize: 2048 kB
<snip>
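
(To compare these counters before and after an interrupted run, I snapshot
just the hugepage lines with a trivial reader; a sketch:)

#include <stdio.h>
#include <string.h>

/* Print only the HugePages_* and Hugepagesize lines of /proc/meminfo,
 * e.g. before and after an interrupted run of the test. */
int main(void)
{
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];

        if (!f) {
                perror("/proc/meminfo");
                return 1;
        }
        while (fgets(line, sizeof(line), f))
                if (!strncmp(line, "HugePages", 9) ||
                    !strncmp(line, "Hugepagesize", 12))
                        fputs(line, stdout);
        fclose(f);
        return 0;
}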

and the top command output shows:

<snip>
Mem: 3919196k total, 3546216k used, 372980k free, 14180k buffers
Swap: 610460k total, 0k used, 610460k free, 287656k cached
<snip>

Even though the process has already exited, why is the memory not being
freed? Are the huge pages still locked in memory? Or do they need some time,
or some explicit step, before they are broken back into small pages and
freed?

There are no other memory-hungry processes running on my system.

The memory is not freed until the next reboot.
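
One guess on my part: since the test creates a file in the hugetlbfs mount,
maybe an interrupted run leaves that file behind, and the reservation is held
as long as the file exists? If so, something like this sketch (the mount path
is again my assumption) should clear any leftovers:

#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <unistd.h>

#define MOUNT "/mnt/huge"   /* my hugetlbfs mount point */

int main(void)
{
        DIR *d = opendir(MOUNT);
        struct dirent *e;
        char path[4096];

        if (!d) {
                perror("opendir");
                return 1;
        }
        while ((e = readdir(d)) != NULL) {
                if (!strcmp(e->d_name, ".") || !strcmp(e->d_name, ".."))
                        continue;
                snprintf(path, sizeof(path), "%s/%s", MOUNT, e->d_name);
                /* Unlinking the file should drop its huge page
                 * reservation once nothing maps it any more. */
                if (unlink(path) == 0)
                        printf("removed %s\n", path);
                else
                        perror(path);
        }
        closedir(d);
        return 0;
}

But I would like to understand whether that is really what is happening.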

Please let me know more about this problem.

Thanks,
Halesh
