Re: [00/41] Large Blocksize Support V7 (adds memmap support)

From: Goswin von Brederlow
Date: Sat Sep 15 2007 - 08:15:30 EST


Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> writes:

> On Tue, 11 Sep 2007 14:12:26 +0200 Jörn Engel <joern@xxxxxxxxx> wrote:
>
>> While I agree with your concern, those numbers are quite silly. The
>> chances of 99.8% of pages being free and the remaining 0.2% being
>> perfectly spread across all 2MB large_pages are lower than those of SHA1
>> creating a collision.
>
> Actually it'd be pretty easy to craft an application which allocates seven
> pages for pagecache, then one for <something>, then seven for pagecache, then
> one for <something>, etc.
>
> I've had test apps which do that sort of thing accidentally. The result
> wasn't pretty.

Except that the application's 7 pages are movable and the <something>
would have to be unmovable. And in that case they should not share the
same memory region; at the very least they should never be allowed to
interleave in such a pattern on a larger scale.

The only way a fragmentation catastrophe can be (provably) avoided is
by having so few unmovable objects that size + max waste << RAM
size. The smaller the better. Allowing movable and unmovable objects
to mix means that max waste goes way up, because each unmovable page
can pin the rest of its block and make it useless for higher-order
allocations. In your example the waste would be 7*size; with a 2MB
upper order limit it would be 511*size.
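To make that arithmetic concrete, here is a minimal user-space sketch
(not kernel code; the 4KB page size, the two block sizes, and the
unmovable-page count are only assumptions for illustration) of the
worst case where every unmovable page lands in a different max-order
block and pins the remaining pages of that block:

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Worst case: each unmovable page makes the other (pages_per_block - 1)
 * pages of its block unusable for higher-order allocations. */
static unsigned long worst_case_waste(unsigned long unmovable_pages,
                                      unsigned long pages_per_block)
{
        return unmovable_pages * (pages_per_block - 1) * PAGE_SIZE;
}

int main(void)
{
        unsigned long unmovable = 1000;  /* hypothetical count */

        printf("8-page blocks:   %lu KB wasted\n",
               worst_case_waste(unmovable, 8) / 1024);
        printf("512-page blocks: %lu KB wasted\n",
               worst_case_waste(unmovable, 512) / 1024);
        return 0;
}

With 1000 interleaved unmovable pages that is roughly 28MB of waste at
an 8-page (32KB) block size, but about 2GB at a 512-page (2MB) block
size, which is why the upper order limit matters so much here.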

I keep coming back to the fact that movable objects should be moved
out of the way for unmovable ones. Anything else just allows
fragmentation to build up.

MfG
Goswin
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/