Re: Avoiding external fragmentation with a placement policy Version 12

From: Steve Lord
Date: Fri Jun 10 2005 - 12:54:57 EST


Christoph Lameter wrote:
> On Wed, 8 Jun 2005, Martin J. Bligh wrote:
>
>> Right. I agree that large allocs should be reliable. Whether we care so
>> much about whether they're performant is an interesting question ... I
>> don't know. I think the answer is maybe not, within reason. The cost of
>> fishing in the allocator might well be irrelevant compared to the cost
>> of freeing the necessary memory area?


> Large consecutive page allocation is important for I/O. Lots of drivers
> can issue transfer requests spanning multiple pages, which is only
> possible if the pages are physically contiguous. If memory is
> fragmented, this is no longer possible.
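
A minimal sketch (mine, not from the thread) of the driver-side
allocation Christoph is describing: try one higher-order allocation so
the whole transfer sits in a physically contiguous buffer, and fall
back to order-0 pages when fragmentation defeats it. The helper name
is made up for illustration.

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Hypothetical helper: try to get 2^order physically contiguous
     * pages for one large transfer. */
    static struct page *grab_io_buffer(unsigned int order)
    {
        struct page *page;

        /* One contiguous run lets the whole request be described
         * with a single scatter-gather element. */
        page = alloc_pages(GFP_KERNEL | __GFP_NOWARN, order);
        if (page)
            return page;

        /* Fragmented memory: no contiguous run of that size is
         * left, so the caller must build the transfer from order-0
         * pages, burning one scatter-gather element per page. */
        return alloc_pages(GFP_KERNEL, 0);
    }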

Christoph's point, I think, is one of the reasons Mel set off down
this path in the first place. Scatter-gather only gets you so far,
and it makes the DMA engine work harder. We have seen cases where
Windows got more bandwidth out of Fibre Channel RAIDs than Linux
could, and Windows was issuing fewer, larger SCSI commands to do it.
Keep a Linux box busy for a few days and its memory map gets very
fragmented; requests to the SCSI layer that could have been larger
tend to be limited by the maximum number of scatter-gather elements a
device can handle. Some less powerful RAIDs (Apple Xraids, for
example) can become CPU bound rather than I/O bound when that happens.
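
A rough sketch, again mine, of why fragmentation inflates the element
count: physically adjacent pages can share one scatter-gather element,
so the same amount of data needs more elements, and hits a device's
segment limit sooner, when its pages are scattered.

    #include <linux/mm.h>

    /* Illustrative only: count the scatter-gather elements needed to
     * cover an array of pages.  Each run of physically consecutive
     * pages collapses into one element; scattered pages need one
     * element apiece, and a request must be split once the device's
     * segment limit is reached. */
    static unsigned int count_sg_elements(struct page **pages, int nr)
    {
        unsigned int elems = 1;
        int i;

        if (nr <= 0)
            return 0;

        for (i = 1; i < nr; i++)
            if (page_to_pfn(pages[i]) != page_to_pfn(pages[i - 1]) + 1)
                elems++;

        return elems;
    }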

In cases like this, what tends to help is giving processes their
address space in large, physically contiguous chunks of pages.

Steve
