readahead: make context readahead more conservative

From: Fengguang Wu
Date: Thu Aug 08 2013 - 04:54:44 EST

This helps performance on moderately dense random reads on SSD.

Transaction-Per-Second numbers provided by Taobao:

QPS   case
7536  disable context readahead totally
7129  slower size rampup and start RA on the 3rd read (w/ patch)
6717  slower size rampup
5581  unmodified context readahead (w/o patch)

Before the patch, readahead was started whenever page N+1 was read and
page N happened to have been read recently. After the patch, readahead
is only started when *three* random reads happen to access pages N,
N+1, N+2. The probability of this happening is extremely low for pure
random reads, unless they are very dense, which actually deserves some
readahead.

Also start with a smaller readahead window. The impact on interleaved
sequential reads should be small, because for a long-run stream, the
small readahead window ramp-up phase is negligible.

The context readahead actually benefits clustered random reads on HDD
whose seek cost is pretty high. However as SSD is increasingly used
for random read workloads it's better for the context readahead to
concentrate on interleaved sequential reads.

Another SSD random read test from Miao Xie:

# file size: 2GB
# read IO amount: 625MB
sysbench --test=fileio \
--max-requests=10000 \
--num-threads=1 \
--file-num=1 \
--file-block-size=64K \
--file-test-mode=rndrd \
--file-fsync-freq=0 \
--file-fsync-end=off run

shows the performance of btrfs improving from 69MB/s to 121MB/s, and
ext4 from 104MB/s to 121MB/s.

Tested-by: Tao Ma <tm@xxxxxx>
Tested-by: Miao Xie <miaox@xxxxxxxxxxxxxx>
Signed-off-by: Wu Fengguang <fengguang.wu@xxxxxxxxx>
mm/readahead.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

--- linux-next.orig/mm/readahead.c	2013-08-08 16:21:29.675286154 +0800
+++ linux-next/mm/readahead.c	2013-08-08 16:21:33.851286019 +0800
@@ -371,10 +371,10 @@ static int try_context_readahead(struct
 	size = count_history_pages(mapping, ra, offset, max);
 
 	/*
-	 * no history pages:
+	 * not enough history pages:
 	 * it could be a random read
 	 */
-	if (!size)
+	if (size <= req_size)
 		return 0;
 
 	/*
@@ -385,8 +385,8 @@ static int try_context_readahead(struct
 		size *= 2;
 
 	ra->start = offset;
-	ra->size = get_init_ra_size(size + req_size, max);
-	ra->async_size = ra->size;
+	ra->size = min(size + req_size, max);
+	ra->async_size = 1;
 
 	return 1;
 }