Re: Swapping in 2.6.10 and 2.6.11.11 on a desktop system

From: Alexander Gretencord
Date: Mon Jun 20 2005 - 14:20:59 EST


On Thursday 16 June 2005 06:16, you wrote:
> echo 100 > /proc/sys/vm/mapped

Doesn't work either.
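For the record, here is roughly how I set and verify the knobs before each test run (mapped and hardmaplimit only exist with the patch applied, swappiness is the mainline knob; the paths are just the ones from your suggestion):

# set the patch's test value and confirm it took effect
echo 100 > /proc/sys/vm/mapped
cat /proc/sys/vm/mapped
# the mainline knob I also experimented with
cat /proc/sys/vm/swappiness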

What I do to test this is the following:

Start X with KDE, then start a Konqueror instance, a Firefox, Eclipse and VMware.
This raises memory load to about 250MB. Then I boot an OS inside the VMware
virtual machine. This increases memory allocation further, because the guest
operating system has to live somewhere, and it generates I/O whenever the
virtual disk is accessed. With mapped=100 it takes longer until the system
begins to swap, but once it has begun swapping I get this:

             total       used       free     shared    buffers     cached
Mem:           503        498          5          0         10        412
-/+ buffers/cache:         75        428
Swap:          494        230        264
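To spell the output out: of the 498MB shown as used, only about 75MB is
application memory (498 - 10 buffers - 412 cached, which is the
"-/+ buffers/cache" figure), yet 230MB of the 494MB swap is occupied. So 422MB
sits in buffers and cache while application data gets pushed to disk.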

It does not matter which patch I use: once the magical "begin swapping" mark
is reached, resident application memory drops to a bare minimum and cache
usage goes up. The I/O cache is probably not even doing any good, as there is
virtually nothing reusable in it.

The problem is that the kernel is thinking in the wrong direction (freeing RAM
for disk cache). Maybe that is fine for a streaming server that uses 10MB of
RAM for code and internal data and constantly accesses the same 400MB of
streaming video data, or for a webserver serving a large set of static pages.
But it is not fine for my desktop workload, where about 400MB of data belongs
to applications and I/O patterns rarely repeat (so disk cache efficiency is
probably very low).

Any idea why the kernel behaves like this, and how I can get the expected
behaviour? Expected behaviour for a desktop would be:

Use as much RAM for disk cache as is otherwise free. Once applications start
using all the RAM, keep only a certain percentage or a bare minimum of disk
cache. What is absolutely undesirable is an ever-growing disk cache when there
is not enough RAM to keep the applications and/or their data resident.
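In case anyone wants to reproduce this, a crude loop like the following (just
sampling /proc/meminfo once a second, nothing patch-specific) is what I use to
watch the cache grow while the applications get swapped out:

# sample the relevant /proc/meminfo counters once per second
while true; do
    grep -E '^(MemFree|Buffers|Cached|SwapFree)' /proc/meminfo
    echo ---
    sleep 1
done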

> If this tries so hard to avoid swap that you get an out-of-memory condition
> you may also have to disable the hard maplimit with this:
> echo 0 > /proc/sys/vm/hardmaplimit

Yes, I get OOM conditions with mapped=100, but they are not my real problem;
the problem is the disk cache usage pattern. If it would help, I still have
some dmesg output from the OOM killer.

Any ideas? Am I just doing something wrong? I don't use any special /proc
settings other than the swappiness/mapped test values.
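If it helps to rule out a configuration problem, this dumps everything under
/proc/sys/vm in one go (the set of files differs between the vanilla and the
patched kernel):

# print every vm sysctl as filename:value
grep . /proc/sys/vm/*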


Alex