Re: The performance and behaviour of the anti-fragmentation related patches

From: Joel Schopp
Date: Mon Mar 05 2007 - 11:45:52 EST


>> If you only need single-page and smaller allocations, then no, it's not a problem. As soon as you go above that, it will be. You don't need to go all the way up to MAX_ORDER size to see an impact; it just gets increasingly severe as you move away from 1 page and towards MAX_ORDER.

> We allocate order 1 and 2 pages for stuff without too much problem.

The question I want answered is: where do you draw the line on what is acceptable to allocate as a single contiguous block?

1 page? 8 pages? 256 pages? 4K pages? Obviously 1 page works fine. With a 4KB page size and a 16MB MAX_ORDER block, an allocation of 4K (4096) pages is theoretically supported, but in practice it fails under almost any circumstances (unless you use Mel's patches).
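
To make the order arithmetic concrete, here is a minimal sketch (my illustration, not code from Mel's patches; the function names are made up) of what asking the buddy allocator for a contiguous block looks like. An order-n request is 2^n physically contiguous pages, so with 4KB pages the 4096-page case above is an order-12 request:

#include <linux/gfp.h>
#include <linux/mm.h>

/* Ask the buddy allocator for 2^order physically contiguous pages.
 * With 4KB pages, order 0 = 4KB and order 12 = 16MB.  Returns NULL
 * when no free block of that order exists, which happens more and
 * more often as the order grows and memory fragments. */
static struct page *grab_contig_block(unsigned int order)
{
        return alloc_pages(GFP_KERNEL, order);
}

static void release_contig_block(struct page *page, unsigned int order)
{
        if (page)
                __free_pages(page, order);
}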

> on-demand hugepages could be done better anyway by having the hypervisor
> defrag physical memory and provide some way for the guest to ask for a
> hugepage, no?

Unless you break the 1:1 virt-phys mapping, it doesn't matter whether the hypervisor can defrag this for you: the kernel will have the physical address cached away somewhere and will expect the data not to move.
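
As a rough illustration (mine, and the driver state here is hypothetical): on the kernel's linear mapping, virt_to_phys() is just a constant offset, and drivers cache its result, e.g. to program a DMA engine, so nothing would tell them if the hypervisor later moved the underlying frame:

#include <linux/types.h>
#include <asm/io.h>

static dma_addr_t cached_bus_addr;      /* hypothetical driver state */

/* The physical address is computed once and squirrelled away; the
 * device will keep using it long after this call returns, so the
 * page behind 'buf' must never move out from under us. */
static void remember_dma_target(void *buf)
{
        cached_bus_addr = virt_to_phys(buf);
}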

I'm a big fan of making this somebody else's problem, and the hypervisor would be a good place for it. I just can't figure out how to actually do it at that layer without changing Linux in a way that is unacceptable to the community at large.

