[PATCH 2/3] readahead: reduce unnecessary mmap_miss increases

From: Wu Fengguang
Date: Fri Apr 29 2011 - 23:31:49 EST


From: Andi Kleen <ak@xxxxxxxxxxxxxxx>

The original INT_MAX cap is too large; reduce it in order to

- avoid unnecessarily dirtying/bouncing the cache line
- restore mmap read-around faster on a changed access pattern (see the
  recovery note below)
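
For context: read-around is switched off once ra->mmap_miss exceeds
MMAP_LOTSAMISS (defined as 100 in mm/filemap.c), and the counter only
drains again one step per page cache hit, via the decrement in
do_async_mmap_readahead():

	if (ra->mmap_miss > 0)
		ra->mmap_miss--;

So with the new cap of MMAP_LOTSAMISS * 10 == 1000, at most ~900 hit
faults bring the counter back below the threshold and re-enable
read-around, whereas a counter pushed toward INT_MAX by a long random
phase could need on the order of 2^31 hits, i.e. read-around would in
practice never come back.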

Background: in the mosbench exim benchmark, which takes multi-threaded
page faults on a shared struct file, the ra->mmap_miss updates were
found to cause excessive cache line bouncing on tmpfs. The ra state
updates are needless on tmpfs in the first place, since it disables
readahead entirely (shmem_backing_dev_info.ra_pages == 0).
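
To illustrate the bouncing itself, here is a minimal user-space sketch
(not kernel code; the thread count, iteration count and the
fault_path() name are made up for the demo). Four threads hammer a
shared counter the way concurrent faults hammer ra->mmap_miss on a
shared struct file; the update is left intentionally racy, mirroring
the kernel's lockless increment. With an INT_MAX cap every iteration
writes the line; with the small cap the writes stop after the first
~1000 iterations and the line can stay shared:

#include <pthread.h>
#include <stdio.h>

#define MMAP_LOTSAMISS	100	/* same value as mm/filemap.c */

static unsigned int mmap_miss;	/* stand-in for ra->mmap_miss */

static void *fault_path(void *arg)
{
	int i;

	for (i = 0; i < 1000000; i++) {
		/* capped update: the line is only dirtied until saturation */
		if (mmap_miss < MMAP_LOTSAMISS * 10)
			mmap_miss++;
	}
	return NULL;
}

int main(void)
{
	pthread_t t[4];
	int i;

	for (i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, fault_path, NULL);
	for (i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	printf("mmap_miss saturated at %u\n", mmap_miss);
	return 0;
}

(Build with: gcc -pthread demo.c)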

Tested-by: Tim Chen <tim.c.chen@xxxxxxxxx>
Signed-off-by: Andi Kleen <ak@xxxxxxxxxxxxxxx>
Signed-off-by: Wu Fengguang <fengguang.wu@xxxxxxxxx>
---
mm/filemap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- linux-next.orig/mm/filemap.c 2011-04-23 09:01:44.000000000 +0800
+++ linux-next/mm/filemap.c 2011-04-23 09:17:21.000000000 +0800
@@ -1538,7 +1538,8 @@ static void do_sync_mmap_readahead(struc
 		return;
 	}
 
-	if (ra->mmap_miss < INT_MAX)
+	/* Avoid banging the cache line if not needed */
+	if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
 		ra->mmap_miss++;
 
 	/*

