Re: [VM PATCH] Faster reclamation of dirty pages and unused inode/dcache entries in 2.4.22

From: Antonio Vargas
Date: Fri Aug 29 2003 - 12:32:42 EST


On Fri, Aug 29, 2003 at 08:01:11AM -0700, Shantanu Goel wrote:
> Hi kernel hackers,
>
> The VM subsystem in Linux 2.4.22 can cause spurious
> swapouts in the presence of lots of dirty pages.
> Presently, as dirty pages are encountered,
> shrink_cache() schedules a writepage() and moves the
> page to the head of the inactive list. When a lot of
> dirty pages are present, this can break the FIFO
> ordering of the inactive list because clean pages
> further down the list will be reclaimed first. The
> following patch records the pages being laundered, and
> once SWAP_CLUSTER_MAX pages have been accumulated or
> the scan is complete, goes back and attempts to move
> them back to the tail of the list.
>
> The second part of the patch reclaims unused
> inode/dentry/dquot entries more aggressively. I have
> observed that on an NFS server where swap out activity
> is low, the VM can shrink the page cache to the point
> where most pages are used up by unused inode/dentry
> entries. This is because page cache reclamation
> succeeds most of the time except when a swap_out()
> happens.
>
> Feedback and comments are welcome.

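Just to check I read the first part right, here is a toy restatement of
the laundering scheme in plain C. A sketch only -- the struct and the
helper names are made up for illustration, SWAP_CLUSTER_MAX is shrunk
to 4 for the demo, and none of this is your patch or the real
shrink_cache():

/*
 * Toy model of the laundering idea: dirty pages get writeback started,
 * are remembered in laundry[], and are pushed back to the tail of the
 * FIFO once SWAP_CLUSTER_MAX of them (or the end of the scan) is reached.
 */
#include <stdio.h>

#define SWAP_CLUSTER_MAX 4		/* small value for the toy */
#define NR_PAGES 10

struct toy_page {
	int id;
	int dirty;
};

static void start_writeback(struct toy_page *p)
{
	printf("writepage(%d)\n", p->id);
	p->dirty = 0;
}

static void move_to_inactive_tail(struct toy_page *p)
{
	printf("move page %d back to tail\n", p->id);
}

static void scan_inactive(struct toy_page *pages, int nr_to_scan)
{
	struct toy_page *laundry[SWAP_CLUSTER_MAX];
	int nr_laundry = 0;
	int i, n;

	for (n = 0; n < nr_to_scan; n++) {
		struct toy_page *p = &pages[n];

		if (!p->dirty)
			continue;	/* clean pages reclaimed as before */

		start_writeback(p);		/* schedule the writepage() */
		laundry[nr_laundry++] = p;	/* and remember the page */

		/* once enough have accumulated, push them to the tail so
		 * they do not overtake clean pages in the FIFO order */
		if (nr_laundry == SWAP_CLUSTER_MAX) {
			for (i = 0; i < nr_laundry; i++)
				move_to_inactive_tail(laundry[i]);
			nr_laundry = 0;
		}
	}

	/* scan done: flush whatever is left in laundry[] */
	for (i = 0; i < nr_laundry; i++)
		move_to_inactive_tail(laundry[i]);
}

int main(void)
{
	struct toy_page pages[NR_PAGES];
	int n;

	for (n = 0; n < NR_PAGES; n++) {
		pages[n].id = n;
		pages[n].dirty = (n % 2);	/* every other page dirty */
	}
	scan_inactive(pages, NR_PAGES);
	return 0;
}
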
Micro-optimization (which helps a lot on x86):

-	for (i = nr_pages - 1; i >= 0; i--) {
-		page = laundry[i];
+	laundry += nr_pages;
+	for (i = -nr_pages - 1; ++i != 0; ) {
+		page = laundry[i];

Your original code reads from higher to lower addresses,
while the new one reads from lower to higher addresses.

The new code changes and then tests the loop counter,
so it's a little bit faster :)

Both compare the counter against zero, so both can use the flags
set by the inc/dec directly, with no intervening "cmp ecx, CONSTANT".
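
Outside the kernel, the two loop shapes look like this as a stand-alone
toy (process() and the int array are just placeholders for the real
per-page work):

#include <stdio.h>

static void process(int value)
{
	printf("%d\n", value);
}

/* old shape: counts down, test is "i >= 0" */
static void walk_down(const int *items, int nr)
{
	int i;

	for (i = nr - 1; i >= 0; i--)
		process(items[i]);
}

/* new shape: bias the base pointer, then walk negative offsets up
 * towards zero, so the increment and the zero test sit together and
 * the compiler needs no separate cmp */
static void walk_up(const int *items, int nr)
{
	int i;

	items += nr;
	for (i = -nr - 1; ++i != 0; )
		process(items[i]);
}

int main(void)
{
	int items[] = { 10, 20, 30, 40 };

	walk_down(items, 4);	/* prints 40 30 20 10 */
	walk_up(items, 4);	/* prints 10 20 30 40 */
	return 0;
}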

To the "powers that be", would this type of microoptimizations
be futher welcomed?

Greets, Antonio.