freepages.min < 62 = problem?

Mike Galbraith (mikeg@weiden.de)
Sun, 21 Feb 1999 15:03:16 +0100 (CET)


Hi Folks,

Short version:
In playing with Bill Hawes' defragmentation patch, I found that the
fragmentation problem doesn't really seem to exist (at least as far
as non-DMA memory goes). There does, however, seem to be a problem
with the default tuning parameters for low-memory machines. If
freepages.min < 62, you are guaranteed that some order will at times
be unavailable, because the kernel lets free memory fall below the
point where one block of each order can exist at once: one block each
of orders 1 through 5 takes 2+4+8+16+32 = 62 pages. The kernel sets
freepages to 32 64 96 (min low high) on a 16M machine. It seems to
me that 62 should be the absolute lower limit for freepages.min. (?)
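
To double-check that arithmetic, here's a throwaway user-space sketch
(nothing kernel-specific, just summing the buddy block sizes for
orders 1 through 5):

    #include <stdio.h>

    int main(void)
    {
            int order, pages = 0;

            /* one free block of each order 1..5 takes 2^order pages */
            for (order = 1; order <= 5; order++)
                    pages += 1 << order;

            printf("pages needed: %d\n", pages); /* 2+4+8+16+32 = 62 */
            return 0;
    }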

Some details:
I modified the defrag patch to be /proc tunable, so that I can set
the maximum order it tries to defragment and set an availability goal
for each order. With the max order set to 5 and a goal of 1 available
chunk of each order other than 0, I was seeing tons of activity. With
freepages.min set to 62, I see zero activity.. there is virtually
always a chunk of each order available. In practice, the system seems
to do a great job of defragmenting itself.
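
Just to illustrate what the tunables amount to, here's a hypothetical
sketch (plain C with made-up numbers, not Bill Hawes' actual code):
the defragger has nothing to do as long as every order meets its
goal. free_count[] stands in for the free blocks the buddy allocator
currently holds at each order, order_goal[] for the /proc-settable
goals.

    #include <stdio.h>

    #define MAX_ORDER 5

    /* made-up sample state: free blocks held at each order 0..5 */
    static int free_count[MAX_ORDER + 1] = { 40, 3, 0, 1, 1, 1 };
    /* the goals: 1 block of each order other than 0 */
    static int order_goal[MAX_ORDER + 1] = {  0, 1, 1, 1, 1, 1 };

    int main(void)
    {
            int order;

            for (order = 1; order <= MAX_ORDER; order++)
                    if (free_count[order] < order_goal[order])
                            printf("order %d short: have %d, want %d\n",
                                   order, free_count[order],
                                   order_goal[order]);
            return 0;
    }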

As I'm writing this, I'm running a make -j3 bzImage, a find /, and a
ping -f nutherbox on a system with 16M of RAM enabled (very fat
config).. the defragmenter is totally silent. If I load the sound
system and add some mpg123 to the mix, there is a brief flurry of
activity as the defragmenter replaces the missing 64k dmabuf chunk..
then silence. (More likely luck filling the hole, actually.. the
defragger only acts if it finds a chunk that is complete except for
exactly one missing page. The interesting thing is that the hole
_does_ get filled even on a very busy low-memory box.)
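
For the curious, here's a hypothetical sketch of that "complete
except for exactly one missing page" test (again plain C with made-up
state, not the patch itself). An order-4 chunk is 16 pages = 64k with
4k pages, and it only counts as a candidate when a single page is
busy:

    #include <stdio.h>

    #define ORDER 4                 /* 16 pages = 64k with 4k pages */
    #define PAGES (1 << ORDER)

    /* made-up state: one busy page inside an otherwise free chunk */
    static int page_in_use[PAGES] = { [7] = 1 };

    int main(void)
    {
            int i, busy = 0, hole = -1;

            for (i = 0; i < PAGES; i++)
                    if (page_in_use[i]) {
                            busy++;
                            hole = i;
                    }

            if (busy == 1)
                    printf("candidate: free page %d, chunk is whole\n",
                           hole);
            else
                    printf("not a candidate (%d busy pages)\n", busy);
            return 0;
    }

Free (or relocate) that one page and the whole 64k chunk coalesces.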

Testing done on 2.2.2-pre5+bunchapatches and 2.2.1-stock+defrag.

-Mike
