wild imbalance in kswapd state

Bill Hawes (whawes@star.net)
Mon, 20 Jul 1998 16:04:52 -0400


I'm trying to figure out how to restore performance for repeated
page-cache-intensive operations, so I instrumented the state
transitions in do_try_to_free_page().

The results were pretty interesting -- once swapping starts, kswapd
stays parked in the shrink_mmap state until almost all of the page
cache is gone. Some example results:

Jul 20 10:27:30 acer kernel: kswapd: leaving state 0, success=4865
Jul 20 10:27:30 acer kernel: kswapd: leaving state 1, success=0
Jul 20 10:27:30 acer kernel: kswapd: leaving state 2, success=0
Jul 20 10:27:30 acer kernel: kswapd: leaving state 0, success=59
Jul 20 10:27:30 acer kernel: kswapd: leaving state 1, success=0
Jul 20 10:27:30 acer kernel: kswapd: leaving state 2, success=6 <- swap
Jul 20 10:27:31 acer kernel: kswapd: leaving state 0, success=84
Jul 20 10:27:31 acer kernel: kswapd: leaving state 1, success=0
Jul 20 10:27:31 acer kernel: kswapd: leaving state 2, success=2 <- swap
Jul 20 10:27:31 acer kernel: kswapd: leaving state 0, success=3
Jul 20 10:27:31 acer kernel: kswapd: leaving state 1, success=0
Jul 20 10:27:31 acer kernel: kswapd: leaving state 2, success=0
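
For reference, these messages come from a quick counter-and-printk hack in
do_try_to_free_page(), not from the patch below; roughly, for state 0, it
looks something like this:

        case 0:
                if (shrink_mmap(i, gfp_mask)) {
                        success++;      /* count successes while in this state */
                        return 1;
                }
                printk("kswapd: leaving state %d, success=%d\n", state, success);
                success = 0;
                state = 1;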

In this case there isn't a lot of memory available elsewhere, but there
is some, and it doesn't seem very helpful to strip all the page cache
before trying elsewhere. I'd rather see at least a periodic attempt to
swap out.

I'm currently experimenting with a patch that sets a maximum number of
successes (e.g. 100) for each state and then forces a transition to the
next one. Has anyone else experimented with something like this, and are
there any pros/cons to mention?
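
To see the effect of the cap without booting a test kernel, here's a toy
userspace model of the state machine (illustration only; the resource
numbers and the MAX_SUCCESS name are made up, and the real change is the
attached patch):

#include <stdio.h>

#define MAX_SUCCESS 100         /* cap on successes before a forced transition */

static int resource[3] = { 5000, 50, 300 };     /* pages reclaimable per state */
static int state = 0, success = 0;

/* one attempt in the current state: 1 = freed a page, 0 = nothing left */
static int try_state(int s)
{
        if (resource[s] > 0) {
                resource[s]--;
                return 1;
        }
        return 0;
}

static int try_to_free_page(void)
{
        int tries;

        for (tries = 0; tries < 4; tries++) {
                if (success < MAX_SUCCESS && try_state(state)) {
                        success++;
                        return 1;
                }
                /* either the state ran dry or the cap forced us onward */
                printf("leaving state %d, success=%d\n", state, success);
                success = 0;
                state = (state + 1) % 3;
        }
        return 0;       /* nothing reclaimable anywhere */
}

int main(void)
{
        while (try_to_free_page())
                ;
        return 0;
}

With the cap, the model keeps cycling through all three states; drop the
"success < MAX_SUCCESS" test and it drains state 0 completely before ever
reaching state 2, which is the parked behavior shown in the log above.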

I've attached a copy of the current patch.

Regards,
Bill
[Attached patch: mm_vmscan109-patch]

--- linux-2.1.109/mm/vmscan.c.old Fri Jul 17 09:10:45 1998
+++ linux-2.1.109/mm/vmscan.c Mon Jul 20 15:38:42 1998
@@ -446,7 +446,7 @@
  */
 static int do_try_to_free_page(int gfp_mask)
 {
-        static int state = 0;
+        static int state = 0, success = 0;
         int i=6;
         int stop;

@@ -457,24 +457,38 @@
         stop = 3;
         if (gfp_mask & __GFP_WAIT)
                 stop = 0;
-
-        if (((buffermem >> PAGE_SHIFT) * 100 > buffer_mem.borrow_percent * num_physpages)
-            || (page_cache_size * 100 > page_cache.borrow_percent * num_physpages))
+        /*
+         * If we're not in the shrink_mmap() state, check
+         * whether to borrow page or buffer cache.
+         */
+        if (state != 0 &&
+            (((buffermem >> PAGE_SHIFT) * 100 > buffer_mem.borrow_percent * num_physpages)
+            || (page_cache_size * 100 > page_cache.borrow_percent * num_physpages)))
                 shrink_mmap(i, gfp_mask);

         switch (state) {
                 do {
                 case 0:
-                        if (shrink_mmap(i, gfp_mask))
+                        if (success < 100 && shrink_mmap(i, gfp_mask)) {
+                                success++;
                                 return 1;
+                        }
+                        success = 0;
                         state = 1;
                 case 1:
-                        if ((gfp_mask & __GFP_IO) && shm_swap(i, gfp_mask))
+                        if (success < 100 && (gfp_mask & __GFP_IO) &&
+                            shm_swap(i, gfp_mask)) {
+                                success++;
                                 return 1;
+                        }
+                        success = 0;
                         state = 2;
                 case 2:
-                        if (swap_out(i, gfp_mask))
+                        if (success < 100 && swap_out(i, gfp_mask)) {
+                                success++;
                                 return 1;
+                        }
+                        success = 0;
                         state = 3;
                 case 3:
                         shrink_dcache_memory(i, gfp_mask);

