Huge Pages, more efficient?

From: john moser
Date: Mon May 03 2004 - 01:14:58 EST


Please CC me on all replies.

Unless I miss my guess, huge (4MiB) pages and regular (4KiB) pages
can be intermixed. Is there some way the kernel could use a
heuristic to detect when a range of VM pages is faulted across
frequently and could be made into a single huge 4MiB page, and
then pay the one-time overhead of moving those pages together
physically and invalidating the 1024 old 4KiB pages in favor of
the new 4MiB page?
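
Roughly the sort of bookkeeping I have in mind, as a toy userspace
model rather than real kernel code (the region table, threshold,
and names are all invented here for illustration):

/*
 * Toy userspace model of the promotion heuristic, not kernel code.
 * Simulated fault addresses are counted per 4MiB-aligned region;
 * once a region has faulted often enough it is flagged for
 * promotion to a huge page.  NR_REGIONS and PROMOTE_AT are
 * invented tuning knobs.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define SMALL_PAGE  (4UL * 1024)
#define HUGE_PAGE   (4UL * 1024 * 1024)
#define NR_REGIONS  64          /* models a 256MiB address space     */
#define PROMOTE_AT  16          /* faults before promoting (a guess) */

static unsigned int fault_count[NR_REGIONS];
static int promoted[NR_REGIONS];

/* Called for every (simulated) fault at virtual address addr. */
static void account_fault(uintptr_t addr)
{
	size_t region = (addr / HUGE_PAGE) % NR_REGIONS;

	if (promoted[region])
		return;         /* already backed by a huge page */

	if (++fault_count[region] >= PROMOTE_AT) {
		/*
		 * The real kernel would allocate 4MiB of contiguous
		 * physical memory here, migrate the 1024 small pages
		 * into it, and replace their PTEs with one huge-page
		 * mapping.
		 */
		promoted[region] = 1;
		printf("promote region at %#lx to a 4MiB page\n",
		       (unsigned long)(region * HUGE_PAGE));
	}
}

int main(void)
{
	/* A loop that keeps touching pages inside one 4MiB range. */
	for (int i = 0; i < 40; i++)
		account_fault(0x800000 + (i % 20) * SMALL_PAGE);
	return 0;
}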

Doing it this way would make huge pages out of ranges with loops
that cross their page boundaries and out of closely spaced small
functions. Pages that weren't really useful as huge pages would
keep their normal fault-in overhead, while the constant faulting
around boundary-crossing loops and close-together functions would
be relieved.

Another neat trick to try would be to add a heuristic on top of
this that simply looks for faults just across one edge of a huge
page and determines whether they are more common than the faults
that would hit the opposite end if the huge page were still
separate 4KiB pages. This would allow the huge page to be panned
across memory to adapt, with only the overhead of migrating the
pages adjacent to the huge page into it.
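
Again as a toy model only (the window size, thresholds, and names
are invented, and it assumes, as below, that a huge page can start
at any 4KiB-aligned address):

/*
 * Toy model of the panning idea, not kernel code.  Faults that
 * land just past either edge of a huge page's current window are
 * counted, and when one edge is clearly hotter the window slides
 * that way by a few small pages.  EDGE_WINDOW, PAN_AT, and
 * PAN_STEP are invented tuning knobs.
 */
#include <stdio.h>
#include <stdint.h>

#define SMALL_PAGE   (4UL * 1024)
#define HUGE_PAGE    (4UL * 1024 * 1024)
#define EDGE_WINDOW  (8 * SMALL_PAGE)  /* how far past an edge we look  */
#define PAN_AT       8                 /* edge faults before sliding    */
#define PAN_STEP     (8 * SMALL_PAGE)  /* how far to slide per decision */

struct huge_window {
	uintptr_t start;               /* 4KiB-aligned window start */
	unsigned int low_faults, high_faults;
};

/* Called for faults that miss the huge page but land near an edge. */
static void edge_fault(struct huge_window *w, uintptr_t addr)
{
	if (addr >= w->start + HUGE_PAGE &&
	    addr <  w->start + HUGE_PAGE + EDGE_WINDOW)
		w->high_faults++;
	else if (addr < w->start && addr >= w->start - EDGE_WINDOW)
		w->low_faults++;
	else
		return;

	if (w->high_faults >= PAN_AT &&
	    w->high_faults > 2 * w->low_faults) {
		/* Real code would migrate PAN_STEP worth of pages at
		 * each end of the window. */
		w->start += PAN_STEP;
		w->low_faults = w->high_faults = 0;
		printf("pan window up to %#lx\n", (unsigned long)w->start);
	} else if (w->low_faults >= PAN_AT &&
		   w->low_faults > 2 * w->high_faults) {
		w->start -= PAN_STEP;
		w->low_faults = w->high_faults = 0;
		printf("pan window down to %#lx\n", (unsigned long)w->start);
	}
}

int main(void)
{
	struct huge_window w = { .start = 0x800000 };

	/* A loop that keeps running off the top of the window. */
	for (int i = 0; i < 20; i++)
		edge_fault(&w, w.start + HUGE_PAGE + (i % 4) * SMALL_PAGE);
	return 0;
}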

I might note that the efficiency of this relies heavily on the
assumption that huge pages only need to be aligned to a normal
4KiB page boundary and not to a huge-page boundary. The second
idea requires this condition to work; the first would work
without it, but would be more efficient if it were true.

Can anyone at least enlighten me as to whether huge pages must be
4KiB aligned or 4MiB aligned?
