Re: Forget fragmentation (was Re: Linux hostile to poverty)

Bill Metzenthen (melbpc@melbpc.org.au)
Sun, 19 Jul 1998 18:56:19 +1000 (EST)


Linus wrote:

> Now, I suspect that those defaults are too big for a 8MB system. They
> essentially mean that we always try to keep half a megabyte of memory
> free, which is a pretty big chunk once the kernel and the reserved pages
> have been taken out of the system.
>
> People that have 8MB machines, could you please check out what happens
> when you do a
>
> echo "20 40 60" > /proc/sys/vm/freepages

The effect of such a setting upon the "rusting" effect is given at the
end of this message. It does indeed reduce the magnitude of the
effect, but doesn't remove it.

To put things into perspective, 2.1.109 might be acceptable to most
users with low memory machines because they mightn't trigger the
rusting effect before their next reboot. However, when the effect is
triggered then the machine must be rebooted to restore performance.
It would be preferable if there were some other means -- preferably
automatic -- of restoring performance.

Someone asked about the performance of earlier kernels in the 2.1.xx
series. I happen to have a few of these kernels on my machine, so I
have also added results for the two which are closest to the
introduction of the rusting effect.

Bill Metzenthen

--------------------------------------------------------------------------

Rusting Effect Update.
------- ------ -------

[For those new to the effect: somewhere in the 2.1.xx kernel series
my low memory machine (8 Mbyte) began to get sluggish after being
used for some time, due to lots of swapping. This is reliably triggered
by doing a 'find' on a directory which has lots of files (a few tens
of thousands) in sub-directories, etc. My test consists of compiling
one of the kernel files before and after doing such a find (starting
fresh after re-booting).]

The results so far:

kernel     approx rusting effect
2.0.33     38 --> 30    (improves!)
2.1.96     37 --> 263
2.1.98     43 --> 284
2.1.99     36 --> 100
2.1.100    38 --> 121
2.1.101    47 --> 152
2.1.102    48 --> 150
2.1.103    53 --> 168
2.1.106    57 --> 209
2.1.108    54 --> 200
2.1.109    51 --> 273
2.0.33     42 --> 34
2.1.109    46 --> 153   (1: 20 40 60 > freepages)
              --> 108   (1: 2 min)
              --> 75    (1: 15 min)
              --> 80    (1: 60 min)
2.1.109    40 --> 117   (2: 20 25 30 > freepages)
              --> 90    (2: 2 min)
2.1.40     46 --> 42
2.1.64     56 --> 600

Notes: (1) mm parameters set with: echo "20 40 60" > /proc/sys/vm/freepages
           and then the test repeated after the shown delays, without
           rebooting in between.
       (2) mm parameters set with: echo "20 25 30" > /proc/sys/vm/freepages
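If I understand the 2.1.x interface correctly, the three values in
/proc/sys/vm/freepages are min/low/high thresholds counted in pages
(4 Kbyte each on i386), so "20 40 60" asks the kernel to start
reclaiming well below the roughly half a megabyte Linus mentions.
A sketch of the procedure (needs root, 2.1.x kernels only):

```shell
# Show the current thresholds, then lower them as suggested in
# Linus's message, and confirm the new values took effect.
cat /proc/sys/vm/freepages
echo "20 40 60" > /proc/sys/vm/freepages
cat /proc/sys/vm/freepages
```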

Don't read too much into the precise figures. These results were
obtained over a period of months, during which various things have
changed (such as libraries, probably the compiler, etc). The first
2.0.33 result was obtained on 25 April, the last on 18th July.

All of the later kernels show some (but not full) recovery over time.

-- 
-----------------------------------------------------------------------------
Bill Metzenthen        | See http://www.suburbia.net/~billm/ for information
billm@melbpc.org.au    | on an 80x87 FPU emulator, using floating point
billm@suburbia.net     | (particularly on Linux), and code for manipulating
Melbourne, Australia   | the floating point environment on 80x86 Linux.
-----------------------------------------------------------------------------

- To unsubscribe from this list: send the line "unsubscribe linux-kernel"
  in the body of a message to majordomo@vger.rutgers.edu
  Please read the FAQ at http://www.altern.org/andrebalsa/doc/lkml-faq.html