Re: 2.1.97 mm and cache - basic stupid idea included!

Rik van Riel (H.H.vanRiel@phys.uu.nl)
Tue, 21 Apr 1998 10:46:38 +0200 (MET DST)


On Mon, 20 Apr 1998, Benjamin Redelings I wrote:

> Rik, thanks for your reply and the Documentation that you mentioned :)
> The SC_ROUNDROBIN thing does sound interesting. Um, actually I have an

It's what I used in my mmap-age patch series. It's the
only way we can implement the correct semantics
for /proc/sys/vm/{pagecache,buffermem}.

> Neither /proc/sys/vm nor SC_ROUNDROBIN can fix the problem that I'm
> talking about, I think. I'm thinking about what will happen if you have

I think it will. Please look up the documentation for
/proc/sys/vm/{pagecache,buffermem}.
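As a rough sketch of what those files control (the names below are
assumptions for illustration, not copied from the 2.1.x sources): each
file takes three percentages of total RAM, a floor, a borrow threshold
and a ceiling for the cache.

/* Hypothetical illustration of the three-value tunables behind
 * /proc/sys/vm/{pagecache,buffermem}; names and numbers are assumed,
 * not taken from the kernel sources. */
struct cache_limits {
        unsigned int min_percent;     /* never shrink the cache below this */
        unsigned int borrow_percent;  /* prefer stealing from the cache above this */
        unsigned int max_percent;     /* never let the cache grow beyond this */
};

/* Example values in the spirit of this thread: grow freely up to 75%,
 * steal from the cache first while it is above 30%, never go below 5%. */
static struct cache_limits pagecache_limits = { 5, 30, 75 };

The borrow percentage is the threshold referred to further down as
pagecache.borrowpercent.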

> Q1. If there is not "pressure" for memory, should I let the cache grow
> as large as possible?
> A1. Yes, Duh! e.g., If netscape is the only program (besides XFree)
> running, let the cache get HUGE!

A1'. No, it's not very useful for the cache to grow above (let's say)
75%. The extra 20% it could still grow might help on some very special
occasions, but usually it's not that useful.

> Q2. When other programs start running, should I decrease the cache size,
> or swap?
> A2. Decrease the cache size!!!!

...down to a certain percentage, below which we both decrease
cache size and swap. At the moment this is failing, because
kswapd only switches strategy when it fails at one method...
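A minimal sketch of the strategy selection being argued for here, with
invented names and thresholds (nothing below is taken from kswapd
itself): pick the strategy from the cache size up front, instead of
waiting for one method to fail first.

#include <stdio.h>

#define BORROW_PERCENT 30   /* assumed threshold, cf. pagecache.borrowpercent */
#define MIN_PERCENT     5   /* assumed hard floor for the cache               */

static void reclaim(int cache_percent)
{
        if (cache_percent > BORROW_PERCENT)
                printf("cache at %d%%: shrink the cache only\n", cache_percent);
        else if (cache_percent > MIN_PERCENT)
                printf("cache at %d%%: shrink the cache AND swap\n", cache_percent);
        else
                printf("cache at %d%%: at the floor, swap only\n", cache_percent);
}

int main(void)
{
        reclaim(60);   /* well above the borrow threshold */
        reclaim(20);   /* between the floor and the borrow threshold */
        reclaim(4);    /* at the floor */
        return 0;
}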

> Q3. Eventually, we should stop decreasing the cache size, and swap a
> little.
> When?
>
> A3.1 Default (not very good): at some specified "minimum percentage"
> A3.2 Above the "minimum percentage" and depend on the system "weather"
> and other conditions, like various other subsystems. Estimate how the
> improvement of each system ("filesystems" and "memory") would change if
> you gave it 4kb.

A3.3 Default: at pagecache.borrowpercent, which is considerably
above the minimum.

> filesystems: performance improves if info is already in memory (cache)
> (virtual) memory: performance improves if info is already in memory

The global target of a memory management system is to limit
the number of I/Os that the system needs to do. But we have
to take into account that FS I/O is often more expensive
than swap I/O (need to look up metadata, data is scattered
all over the disk, etc...).
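For illustration only, a toy cost model of that trade-off; the relative
costs are made-up numbers and the helper is not a kernel function:

#include <stdio.h>

/* Made-up relative refetch costs, only to illustrate the point above. */
#define SWAP_IN_COST  2   /* reading a page back from a contiguous swap area */
#define FS_READ_COST  3   /* re-reading a page through the filesystem        */

/* Expected cost of evicting a page: chance (0..100, e.g. from page
 * aging) that we need it again soon, times the cost of refetching it. */
static int eviction_cost(int reuse_chance, int refetch_cost)
{
        return reuse_chance * refetch_cost;
}

int main(void)
{
        /* A cold cache page can be cheaper to give up than a warmer
         * swap-backed page, despite the higher per-read FS cost. */
        printf("cold fs page:   %d\n", eviction_cost(10, FS_READ_COST));
        printf("warm swap page: %d\n", eviction_cost(40, SWAP_IN_COST));
        return 0;
}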

> (Note: "pressure" is a good analogy. If you want to minimize the
> "energy" of the system, then the "pressure" should be like "dE/dx". In
> my example, dx=4k. Coming up with some common measure of E could be
> hard... but I would just suggest the total number of disk reads
> predicted by the page-aging statistics.... if that would work.)

We could use the 'pressure' model by:
- counting the total number of page faults per megabyte
- stealing from a program or the cache when it has fewer than
the average number of faults/megabyte
- leaving a program alone when it has more than the average
number of faults/megabyte

Of course, these numbers need to be averaged over an
x-second period, with 1 < x < 5 (or something like that).
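A small, self-contained sketch of that faults-per-megabyte idea (all
structures, names and numbers below are invented for illustration, not
kernel code): each task's fault rate is smoothed over a few sampling
intervals and compared against the system-wide average, and we only
steal from tasks that fault less often than average per megabyte.

#include <stdio.h>

struct fault_stats {
        unsigned long faults;    /* page faults in the last interval */
        unsigned long rss_mb;    /* resident size in megabytes       */
        unsigned long avg_rate;  /* smoothed faults per MB (x1000)   */
};

/* Exponential smoothing, roughly averaging over a few intervals. */
static void update_rate(struct fault_stats *s)
{
        unsigned long rate = s->rss_mb ? (s->faults * 1000) / s->rss_mb : 0;

        s->avg_rate = (3 * s->avg_rate + rate) / 4;
        s->faults = 0;           /* start a new interval */
}

/* Steal from this task only if it faults less than the global average. */
static int should_steal_from(const struct fault_stats *s,
                             unsigned long global_avg_rate)
{
        return s->avg_rate < global_avg_rate;
}

int main(void)
{
        struct fault_stats idle_task = { 2, 40, 0 };   /* few faults, big RSS */
        struct fault_stats busy_task = { 400, 20, 0 }; /* many faults         */
        unsigned long global_avg = 5000;               /* made-up average     */

        update_rate(&idle_task);
        update_rate(&busy_task);

        printf("steal from idle task: %d\n", should_steal_from(&idle_task, global_avg));
        printf("steal from busy task: %d\n", should_steal_from(&busy_task, global_avg));
        return 0;
}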

Rik.
+-------------------------------------------+--------------------------+
| Linux: - LinuxHQ MM-patches page | Scouting webmaster |
| - kswapd ask-him & complain-to guy | Vries cubscout leader |
| http://www.phys.uu.nl/~riel/ | <H.H.vanRiel@phys.uu.nl> |
+-------------------------------------------+--------------------------+
