Re: [RESEND] [PATCH] readahead: add blk_run_backing_dev

From: Vladislav Bolkhovitin
Date: Tue Jun 30 2009 - 06:54:59 EST


Wu Fengguang, on 06/30/2009 05:04 AM wrote:
On Mon, Jun 29, 2009 at 11:37:41PM +0800, Vladislav Bolkhovitin wrote:
Wu Fengguang, on 06/29/2009 07:01 PM wrote:
On Mon, Jun 29, 2009 at 10:21:24PM +0800, Wu Fengguang wrote:
On Mon, Jun 29, 2009 at 10:00:20PM +0800, Ronald Moesbergen wrote:
... tests ...

We started with 2.6.29, so why not complete with it (to save Ronald the
additional effort of moving to 2.6.30)?

2. Default vanilla 2.6.29 kernel, 512 KB read-ahead, the rest is default

How about a 2MB RAID readahead size? That translates into roughly 512KB
of per-disk readahead (e.g. with 4 data disks, a 2MB array read splits
into ~512KB per disk).

OK. Ronald, can you run 4 more test cases, please (a sketch of setting
these knobs follows the list):

7. Default vanilla 2.6.29 kernel, 2MB read-ahead, the rest is default

8. Default vanilla 2.6.29 kernel, 2MB read-ahead, 64 KB
max_sectors_kb, the rest is default

9. Vanilla 2.6.29 kernel with Fengguang's patch applied, 2MB
read-ahead, the rest is default

10. Vanilla 2.6.29 kernel with Fengguang's patch applied, 2MB
read-ahead, 64 KB max_sectors_kb, the rest is default
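
For reference, a minimal sketch of setting these knobs through sysfs
(the device names md0/sdb are placeholders for the actual RAID array
and its member disks):

    # 2MB readahead on the RAID device (read_ahead_kb is in kilobytes)
    echo 2048 > /sys/block/md0/queue/read_ahead_kb

    # cap the per-request size at 64KB on a member disk
    echo 64 > /sys/block/sdb/queue/max_sectors_kb

Equivalently, "blockdev --setra 4096 /dev/md0" sets the same 2MB
readahead, since --setra counts 512-byte sectors.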
The results:
I made a blind (unweighted) average over all runs:

N   MB/s      IOPS     case

0   114.859    984.148  Unpatched, 128KB readahead, 512KB max_sectors_kb
1   122.960    981.213  Unpatched, 512KB readahead, 512KB max_sectors_kb
2   120.709    985.111  Unpatched, 2MB readahead, 512KB max_sectors_kb
3   158.732   1004.714  Unpatched, 512KB readahead, 64KB max_sectors_kb
4   159.237    979.659  Unpatched, 2MB readahead, 64KB max_sectors_kb

5   114.583    982.998  Patched, 128KB readahead, 512KB max_sectors_kb
6   124.902    987.523  Patched, 512KB readahead, 512KB max_sectors_kb
7   127.373    984.848  Patched, 2MB readahead, 512KB max_sectors_kb
8   161.218    986.698  Patched, 512KB readahead, 64KB max_sectors_kb
9   163.908    574.651  Patched, 2MB readahead, 64KB max_sectors_kb

So before/after patch:

avg throughput  135.299 => 138.397  (+2.3%)
avg IOPS        986.969 => 903.344  (-8.5%)

The IOPS figure is a bit weird: the drop is dominated by the anomalous
574.651 in case 9.
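
A quick re-derivation of those averages from the table above (plain
awk over the MB/s and IOPS columns):

    # unpatched vs. patched average throughput
    echo 114.859 122.960 120.709 158.732 159.237 | \
        awk '{for (i = 1; i <= NF; i++) s += $i; print s/NF}'   # 135.299
    echo 114.583 124.902 127.373 161.218 163.908 | \
        awk '{for (i = 1; i <= NF; i++) s += $i; print s/NF}'   # 138.397

    # patched average IOPS without the case-9 outlier
    echo 982.998 987.523 984.848 986.698 | \
        awk '{for (i = 1; i <= NF; i++) s += $i; print s/NF}'   # 985.517

So without that single anomalous run, the patched IOPS would be
essentially unchanged, suggesting the -8.5% is an artifact of case 9
rather than a real regression.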

Summaries:
- this patch improves RAID throughput by +2.3% on average
- after this patch, 2MB readahead performs slightly better
(by 1-2%) than 512KB readahead
and the most important one:
- 64 max_sectors_kb performs much better than 512 max_sectors_kb, by
~30% (e.g. case 3 vs. case 1: 158.732 vs. 122.960 MB/s)!
Yes, that's exactly what I wanted to point out ;)

OK, now I tend to agree on decreasing max_sectors_kb and increasing
read_ahead_kb. But before actually trying to push that idea I'd like
to:
- do more benchmarks
- figure out why context readahead didn't help SCST performance
  (previous traces show that context readahead submits perfectly
  formed large IO requests, so I wonder if it's some IO scheduler bug)
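
A minimal way to check the IO scheduler theory is to watch what
actually reaches the device with blktrace while the test runs and look
at the sizes of the dispatched (D) requests (/dev/sdb is a placeholder
for the disk behind SCST):

    blktrace -d /dev/sdb -o - | blkparse -i - | grep -w D

If readahead submits large requests but the dispatched ones are small,
the splitting happens below the page cache, which would point at the
scheduler or the device queue limits.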

Could it be because, as we found out, without your
http://lkml.org/lkml/2009/5/21/319 patch read-ahead was nearly
disabled, and hence it made no difference which algorithm was used?

Ronald, can you run the following tests, please? This time with two
hosts: an initiator (client) and a target (server) connected using 1
Gbps iSCSI. It would be best if vanilla 2.6.29 were run on the client,
but any other kernel is fine as well; just specify which one.
Blockdev-perftest should be run as before in buffered mode, i.e. with
the "-a" switch. A sketch of applying and verifying the client- and
server-side settings follows the test list.
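
For reference, an invocation would look something like this (the exact
argument syntax of blockdev-perftest is an assumption here; only the
"-a" switch comes from the instructions above, and /dev/sdb stands in
for the iSCSI-attached device on the client):

    # assumed syntax: buffered mode (-a), device under test as argument
    ./blockdev-perftest -a /dev/sdb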

1. All defaults on the client, on the server vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 patch with all default settings.

2. All defaults on the client, on the server vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 patch with default RA size and 64KB max_sectors_kb.

3. All defaults on the client, on the server vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 patch with 2MB RA size and default max_sectors_kb.

4. All defaults on the client, on the server vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 patch with 2MB RA size and 64KB max_sectors_kb.

5. All defaults on the client; on the server, vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 patch plus the context RA patches, with default RA size and max_sectors_kb. For your convenience, I have committed the backported context RA patches to the SCST SVN repository.

6. All defaults on the client, on the server vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 and context RA patches with default RA size and 64KB max_sectors_kb.

7. All defaults on the client, on the server vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 and context RA patches with 2MB RA size and default max_sectors_kb.

8. All defaults on the client, on the server vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 and context RA patches with 2MB RA size and 64KB max_sectors_kb.

9. On the client default RA size and 64KB max_sectors_kb. On the server vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 and context RA patches with 2MB RA size and 64KB max_sectors_kb.

10. On the client 2MB RA size and default max_sectors_kb. On the server vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 and context RA patches with 2MB RA size and 64KB max_sectors_kb.

11. On the client 2MB RA size and 64KB max_sectors_kb. On the server vanilla 2.6.29 with Fengguang's http://lkml.org/lkml/2009/5/21/319 and context RA patches with 2MB RA size and 64KB max_sectors_kb.
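
To keep the runs reproducible, here is a small sketch of applying and
verifying the per-case settings on either host (the device name is an
assumption; use the iSCSI disk on the client or the RAID device on the
server):

    DEV=sdb

    # 2MB RA where a case calls for it (leave at default otherwise)
    echo 2048 > /sys/block/$DEV/queue/read_ahead_kb
    # 64KB max_sectors_kb where a case calls for it
    echo 64 > /sys/block/$DEV/queue/max_sectors_kb

    # record the effective values next to each result
    grep . /sys/block/$DEV/queue/read_ahead_kb \
           /sys/block/$DEV/queue/max_sectors_kb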

(I guess the results will be interesting not only to us, so I have restored linux-kernel@ to the CC list.)

Thanks,
Vlad
