Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

From: Nick Piggin
Date: Fri Mar 12 2004 - 09:30:08 EST

Mark_H_Johnson@xxxxxxxxxxxx wrote:
> Nick Piggin <piggin@xxxxxxxxxxxxxxx> wrote:
>> Andrew Morton wrote:
>>> That effect is to cause the whole world to be swapped out when people
>>> return to their machines in the morning. Once they're swapped back in,
>>> the first thing they do is send bitchy emails to you know who.
>>> From a performance perspective it's the right thing to do, but nobody
>>> likes it.
>>
>> Yeah. I wonder if there is a way to be smarter about dropping these
>> used-once pages without putting pressure on more permanent pages...
>> I guess all heuristics will fall down somewhere or other.
>
> Just a question, but I remember from VMS a long time ago that, as part
> of the working-set limits, the "free list" was used to hold pages that
> could be reused at any time but could also be put back into the working
> set quite easily (a "fast" page fault). Could you keep track of the
> swapped pages in a similar manner, so you don't have to go to disk to
> get these pages [or is this already being done]? You would pull them
> back from the free list and avoid the disk I/O in the morning.

Not too sure what you mean. If we've swapped out the pages, it is
because we need the memory for something else. So no.

One thing you could do is re-read swapped pages when you have plenty of
free memory and the disks are idle; a rough userspace sketch of the idea
follows.
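
This is a minimal sketch only: it assumes MADV_WILLNEED is honoured for
swapped anonymous pages (true in later kernels, but an assumption here),
and mem_free_kb() plus the 64MB threshold are purely illustrative. A
real implementation would live in the kernel and watch global reclaim
state, not one process's memory:

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <sys/mman.h>

	/* Read MemFree (in kB) out of /proc/meminfo. */
	static long mem_free_kb(void)
	{
		FILE *f = fopen("/proc/meminfo", "r");
		char line[128];
		long kb = -1;

		if (!f)
			return -1;
		while (fgets(line, sizeof(line), f))
			if (sscanf(line, "MemFree: %ld kB", &kb) == 1)
				break;
		fclose(f);
		return kb;
	}

	/* Ask the kernel to fault a region back in, but only when there
	 * is plenty of free memory to absorb it. */
	static void prefetch_if_idle(void *addr, size_t len)
	{
		if (mem_free_kb() > 64 * 1024)	/* arbitrary threshold */
			madvise(addr, len, MADV_WILLNEED);
	}

	int main(void)
	{
		/* Example: a 16MB anonymous region that may have been
		 * swapped out overnight. */
		size_t len = 16 * 1024 * 1024;
		void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf != MAP_FAILED)
			prefetch_if_idle(buf, len);
		return 0;
	}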

> By the way - with 2.4.24 I see a similar behavior anyway [slow to get
> going in the morning]. I believe it is due to our nightly backup walking
> through the disks. If you could fix the retention of sequentially read
> disk blocks in the various caches, that would help a lot more in
> my mind.

updatedb really wants to be able to give the VM better hints that it is
never going to use these pages again. I hate to cater for the worst
possible case, one that only happens because everyone has it as a 2am
cron job.
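
A rough sketch of what a scanner like updatedb could do today with the
existing posix_fadvise() interface (the scan loop is illustrative, and
whether updatedb actually does anything like this is another question):

	#define _XOPEN_SOURCE 600
	#include <fcntl.h>
	#include <unistd.h>

	/* Read a file sequentially, then tell the kernel we will not
	 * need its page-cache pages again. */
	static int scan_file(const char *path)
	{
		char buf[65536];
		int fd = open(path, O_RDONLY);

		if (fd < 0)
			return -1;

		/* The access pattern is sequential and one-shot. */
		posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

		while (read(fd, buf, sizeof(buf)) > 0)
			; /* index the data here */

		/* Drop the pages this scan pulled in, so the 2am run
		 * does not push out everyone's working set. */
		posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
		close(fd);
		return 0;
	}

	int main(int argc, char **argv)
	{
		int i;

		for (i = 1; i < argc; i++)
			scan_file(argv[i]);
		return 0;
	}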
