Re: [00/41] Large Blocksize Support V7 (adds memmap support)

From: Andrea Arcangeli
Date: Wed Sep 12 2007 - 04:20:36 EST


On Tue, Sep 11, 2007 at 05:04:41PM -0700, Christoph Lameter wrote:
> I would think that your approach would be slower since you always have to
> populate 1 << N ptes when mmapping a file? Plus there is a lot of wastage

I don't have to populate them; I could just map one at a time. The only
reason I want to populate every possible pte that could map that page
(by checking the vma ranges) is to _improve_ performance by decreasing
the number of page faults by an order of magnitude. Then with the 62nd
bit after NX giving me a 64k tlb entry, I could decrease the frequency
of the tlb misses too.
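
To make the pte-populating idea concrete, here is a minimal userspace
sketch (my own illustration, not code from the patch) of the arithmetic
involved: on a fault, take the order-N aligned block around the faulting
address, clamp it to the vma, and fill every pte in that range in one
go. The names (fault_around_range, the struct vma fields) are invented
for the example.

/*
 * Userspace illustration only: compute which ptes a single fault
 * would populate with order-N pages, clamped to the vma.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

struct vma {
	unsigned long start;
	unsigned long end;
};

static void fault_around_range(const struct vma *vma, unsigned long addr,
			       unsigned int order,
			       unsigned long *first, unsigned long *last)
{
	unsigned long block = PAGE_SIZE << order;	/* 64k for order 4 */
	unsigned long lo = addr & ~(block - 1);		/* align down */
	unsigned long hi = lo + block;

	if (lo < vma->start)		/* never map outside the vma */
		lo = vma->start;
	if (hi > vma->end)
		hi = vma->end;

	*first = lo;
	*last = hi - PAGE_SIZE;
}

int main(void)
{
	struct vma vma = { .start = 0x601000, .end = 0x640000 };
	unsigned long first, last;

	fault_around_range(&vma, 0x612345, 4, &first, &last);
	/* one fault fills 16 ptes instead of taking 16 faults */
	printf("populate ptes for %#lx..%#lx (%lu pages)\n",
	       first, last, (last - first) / PAGE_SIZE + 1);
	return 0;
}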

> of memory because even a file with one character needs an order N page? So
> there are less pages available for the same workload.

This is a known issue. The same is true for ppc64 with 64k pages. If
that really is an issue, it may need some generic solution such as
tail packing.
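
Just to show the size of the wastage being discussed (my numbers,
nothing from the patch): a file holding a single character still pins a
whole page-cache block, so the waste grows with the block order unless
tails are packed.

#include <stdio.h>

int main(void)
{
	unsigned long file_size = 1;		/* one character */
	unsigned int orders[] = { 0, 2, 4 };	/* 4k, 16k, 64k blocks */

	for (int i = 0; i < 3; i++) {
		unsigned long block = 4096UL << orders[i];
		printf("order %u: block %6lu bytes, wasted %6lu bytes\n",
		       orders[i], block, block - file_size);
	}
	return 0;
}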

> Then you are breaking mmap assumptions of applications because the order
> N kernel will no longer be able to map 4k pages. You likely need a new
> binary format that has pages correctly aligned. I know that we would need
> one on IA64 if we go beyond the established page sizes.

No, you misunderstood the whole design. My patch will be 100% backwards
compatible in all respects. If I could break backwards compatibility,
70% of the complexity would go away...