Re: 2.1.97 mm and cache - basic stupid idea included!

Oliver Neukum (neukum@fachschaft.org.chemie.uni-muenchen.de)
Wed, 22 Apr 1998 13:54:19 +0200


<snip>

>> Q2. When other programs start running, should I decrease the cache size,
>> or swap?
>> A2. Decrease the cache size!!!!
>
>...down to a certain percentage, below which we both decrease
>cache size and swap. At the moment this is failing, because
>kswapd only switches strategy when it fails at one method...
>
>> Q3. Eventually, we should stop decreasing the cache size, and swap a
>> little.
>> When?
>>
>> A3.1 Default (not very good): at some specified "minimum percentage"
>> A3.2 Above the "minimum percentage" and depend on the system "weather"
>> and other conditions, like various other subsystems. Estimate how the
>> improvement of each system ("filesystems" and "memory") would change if
>> you gave it 4kb.
>
>A3.3 Default: at pagecache.borrowpercent, which is considerably
>above minimum.
>

At that point we already have disk activity and a significant CPU load.
Perhaps we should have another, even higher threshold, below which we
would swap when the system is idle.
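
To make that a bit more concrete, here is a rough sketch of the
two-threshold idea. None of these names or numbers exist in the kernel;
they are made up purely for illustration:

/* Illustration only: the names and percentages are invented, this is
 * not existing kernel code, just the two-threshold policy sketched out. */
#include <stdio.h>

#define CACHE_BORROW_PERCENT  15   /* below this: shrink cache AND swap    */
#define CACHE_IDLE_PERCENT    40   /* below this: swap, but only when idle */

static void shrink_cache(void) { printf("shrinking page cache\n"); }
static void swap_out(void)     { printf("swapping out pages\n");   }

/* Decide what to reclaim, given the cache size (in %) and whether we
 * are idle. */
static void balance_memory(int cache_percent, int system_idle)
{
        if (cache_percent < CACHE_BORROW_PERCENT) {
                shrink_cache();
                swap_out();             /* real pressure: do both        */
        } else if (cache_percent < CACHE_IDLE_PERCENT && system_idle) {
                swap_out();             /* disks are idle, swap is cheap */
        } else {
                shrink_cache();         /* plenty of cache to steal from */
        }
}

int main(void)
{
        balance_memory(50, 1);          /* lots of cache: just shrink it  */
        balance_memory(30, 1);          /* idle: swap a little in advance */
        balance_memory(10, 0);          /* pressure: shrink and swap      */
        return 0;
}

The point of the middle case is only that idle-time swapping is cheap
now and may save us a stall later.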

>> filesystems: performance improves if info is already in memory (cache)
>> (virtual) memory: performance improves if info is already in memory
>
>The global target of a memory management system is to limit
>the number of I/Os that the system needs to do. But we have
>to take into account that FS I/O is often more expensive
>than swap I/O (need to lookup metadata, data is scattered
>all over the disk, etc...).
>

Is this still true if we consider interactive performance as a goal, too?
Let me give an example:
File I/O usually happens as the result of an explicit user request
(a menu selection, etc.), while paging may happen at any time, even
when the user expects no delay (e.g. popping up the window manager's menu).

>> (Note: "pressure" is a good analogy. If you want to minimize the
>> "energy" of the system, then the "pressure" should be like "dE/dx".
In
>> my example, dx=4k. Coming up with some common measure of E could be
>> hard... but I would just suggest the total number of disk reads
>> predicted by the page-aging statistics.... if that would work.)
>
>We could use the 'pressure' model by:
>- - counting the total number of page faults / megabyte
>- - steal from a program or the cache when it has less than
> the average number of faults/megabyte
>- - leave a program alone when it has more than the average
> number of faults/megabyte
>
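
Just so we are talking about the same thing, a crude userspace sketch of
such a faults-per-megabyte comparison (purely illustrative; these
structures do not exist in the kernel):

/* Illustrative only: compare each consumer's fault rate against the
 * average, as in the proposal quoted above. */
#include <stdio.h>

struct consumer {                       /* a process, or the page cache     */
        const char   *name;
        unsigned long faults;           /* page faults in the last interval */
        unsigned long megabytes;        /* resident size                    */
};

/* Steal pages only from consumers whose fault rate is below the average. */
static void pick_victims(struct consumer *c, int n)
{
        unsigned long total_faults = 0, total_mb = 0, avg;
        int i;

        for (i = 0; i < n; i++) {
                total_faults += c[i].faults;
                total_mb     += c[i].megabytes;
        }
        avg = total_mb ? total_faults / total_mb : 0;   /* avg faults/MB */

        for (i = 0; i < n; i++) {
                unsigned long rate = c[i].megabytes ?
                                     c[i].faults / c[i].megabytes : 0;
                if (rate < avg)
                        printf("steal from %s (%lu faults/MB < avg %lu)\n",
                               c[i].name, rate, avg);
                else
                        printf("leave %s alone (%lu faults/MB)\n",
                               c[i].name, rate);
        }
}

int main(void)
{
        struct consumer c[] = {
                { "page cache", 10, 40 },
                { "netscape",   90, 20 },
                { "xterm",       2,  4 },
        };
        pick_victims(c, 3);
        return 0;
}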

We should make adjustments for the cost of an I/O operation.
Getting a page from a ZIP drive is slower than from a harddisk, and NFS
may be worse still.
This, however, requires shrinking the cache of each device separately.
In an ideal world we might even consider balancing I/O over all devices.
In addition the cost may be a function of load: for example, having
our only runnable task wait for a CD to spin up is bad, but if we have
ten tasks demanding CPU it is of little consequence.
That is, under light load the cost is probably determined by disk seek
time, and under heavy load by the share of I/O throughput required and
the CPU usage due to I/O. Could this somehow be measured by the kernel,
or do we need further tuning parameters?
Even cooler would be a factor settable by syscall, allowing, say, a
window manager to tell the kernel to take fewer pages from the task
controlling the focused window, or to spare tasks that are just redrawing
their windows. If we could combine this with adaptive scheduling we
might push interactive performance to new highs.
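
Roughly what I have in mind, as a hypothetical sketch: the io_cost and
importance factors, and the syscall that would set the latter, are
invented here purely for illustration.

/* Hypothetical sketch: weight each consumer's pressure both by how
 * expensive its backing device is and by a user-settable importance
 * factor.  No such syscall or field exists today. */
#include <stdio.h>

struct candidate {
        const char   *name;
        unsigned long faults;       /* page faults per interval             */
        unsigned long megabytes;    /* resident size                        */
        unsigned int  io_cost;      /* relative device cost, e.g.           */
                                    /* 1 = local disk, 4 = NFS or ZIP       */
        unsigned int  importance;   /* set via a hypothetical syscall by a  */
                                    /* window manager: 1 = normal, 4 = the  */
                                    /* task owning the focused window       */
};

/* Higher weighted pressure => more painful to steal from this candidate. */
static unsigned long weighted_pressure(const struct candidate *c)
{
        unsigned long rate = c->megabytes ? c->faults / c->megabytes : 0;
        return rate * c->io_cost * c->importance;
}

int main(void)
{
        struct candidate nfs_task   = { "editor on NFS", 50, 10, 4, 1 };
        struct candidate focused    = { "focused xterm", 50, 10, 1, 4 };
        struct candidate background = { "batch job",     50, 10, 1, 1 };

        printf("%s: %lu\n", nfs_task.name,   weighted_pressure(&nfs_task));
        printf("%s: %lu\n", focused.name,    weighted_pressure(&focused));
        printf("%s: %lu\n", background.name, weighted_pressure(&background));
        return 0;
}

With equal fault rates, the background job comes out cheapest to steal
from, while the NFS-backed task and the focused task are both spared.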

>Of course, these numbers need to be averaged over a
>x second period, with 1<x<5 (or something like that).

Just my 0.02 DM - no symbol available ;-(

Oliver Neukum
