Re: 2.1.130 mem usage.

Andrea Arcangeli (andrea@e-mind.com)
Wed, 2 Dec 1998 20:12:44 +0100 (CET)


On Wed, 2 Dec 1998, Stephen C. Tweedie wrote:

>Drop the 32 instead. Having the age field forces us to do more passes

Just dropped yesterday of course ;)

>That is exactly what we want: the more such pages we find, the more
>pages we want to scan so that we can reclaim them more easily on the

count_min is a function of priority. With a low priority we want to try on
many freeable pages. I think the test should be completely reversed. I am
using the reversed test here and it works fine. Checking for page->count
== 1 is a good thing though, since I want to try count_min times on
_freeable_ pages (and not on non-freeable ones). Stephen, could you try
how your system works with this code of mine:

...
	count_max = (limit<<1) >> (priority>>1);
	count_min = (limit<<1) >> priority;
...
	/*
	 * If the page was freeable but age blocked us.
	 */
	if ((page->inode || page->buffers) &&
	    atomic_read(&page->count) == 1)
		count_min--;
...

Really the comment is not perfect, since it could happen that the pgcache
or buffercache are under min...

>next pass. As the comment says, if we start aggressively hammering the
>page cache, then this algorithm naturally starts to age cached pages
>more rapidly. If the cache is already very small, then we can abort the
>cache loop after having spent a bit of effort looking for, but not
>finding, reusable cache pages. That is self-balancing behaviour.

I still can't understand your point in trying count_min times on
non-freeable pages, and I can't continue to think about it now because I
don't have much time for Linux development today (exams are near :-(). I'll
continue tonight or tomorrow...

>> - free_page_and_swap_cache(page);
>> + free_page(page);
>
>> Doing this we are not really swapping out, I think, because the page now
>> is also on the hd, but it's still in memory and so shrink_mmap() will
>> have double the work to do.
>
>Precisely: by forcing all the real reclaiming work to be done in one
>place, we again try to make the system self-balancing.

OK, but then you should call shrink_mmap() again shortly after every
swapout, since with the new code you tell do_try_to_free_pages() that you
have just freed a page, fine, but really the page is still there (because
it's in the swap cache). Worse, when do_try_to_free_pages() runs again it
will continue to swap out, and so the system will keep swapping out but
will not reclaim memory until shrink_mmap() runs. I think this is the
reason for the excessive swapping of 2.1.130. I just reversed the vmscan
2.1.130 changes and in fact the mm here is perfect with 64 Mbyte of RAM
(really it was pretty good before too, but I was always moving the static
point of do_try_to_free_pages() by hand if the cache was too big). I think
the changes should be reversed in the official tree too, because it's
probably a bit late to change how do_try_to_free_pages() works. In the
current do_try_to_free_pages() there are no bugs, everything is fine and I
can't see why to improve/play with it today (while the current scsi code
will hang after ~0UL jiffies from the first scsi reset).

Andrea Arcangeli

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/