Re: SATA RAID5 speed drop of 100 MB/s

From: Patrick Mau
Date: Sun Jun 24 2007 - 20:25:20 EST


On Mon, Jun 25, 2007 at 12:07:23AM +0200, Carlo Wood wrote:
> On Sun, Jun 24, 2007 at 12:59:10PM -0400, Justin Piszcz wrote:
> > Concerning NCQ/no NCQ, without NCQ I get an additional 15-50MB/s in speed
> > per various bonnie++ tests.
>
> There is more going on than a bad NCQ implementation of the drive imho.
> I did a long test over night (and still only got two schedulers done,
> will do the other two tomorrow), and the difference between a queue depth
> of 1 and 2 is DRAMATIC.
>
> See http://www.xs4all.nl/~carlo17/noop_queue_depth.png
> and http://www.xs4all.nl/~carlo17/anticipatory_queue_depth.png

Hi Carlo,

Have you considered using "blktrace"?

It enables you to gather data from all of the separate request queues
and will also show you how bio requests are mapped from /dev/mdX
to the individual physical disks.
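
For example (the device names below are just placeholders, use your own
md device and its members), you can trace the array and one of its member
disks at the same time and compare where each bio ends up:

    # trace the array and one member disk for 60 seconds (run as root)
    blktrace -w 60 -d /dev/md0 -d /dev/sda
    # decode the recorded events; output files are named after the devices
    blkparse -i md0
    blkparse -i sda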

You can also identify SYNC and BARRIER flags on requests,
which might show you why the md driver sometimes waits
for completion or even has to REQUEUE when the queue is full.
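
If you only want to look at those, blktrace can filter by action mask
while capturing; roughly like this (I believe "barrier" and "sync" are
valid mask names, the man page lists them all):

    # capture only barrier and sync requests on the array
    blktrace -a barrier -a sync -d /dev/md0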

Just compile your kernel with CONFIG_BLK_DEV_IO_TRACE enabled
and pull the "blktrace" (and "blkparse") utilities with git.

The git URL is in the Kconfig help text.
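
Roughly like this (I leave the clone URL itself to the Kconfig text,
and a plain "make" is how I remember the tools building):

    # in your kernel source tree, check that tracing is enabled
    grep BLK_DEV_IO_TRACE .config
    # should show: CONFIG_BLK_DEV_IO_TRACE=y

    # fetch and build the userspace tools
    git clone <git URL from the Kconfig help text> blktrace
    cd blktrace && make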

You have to mount debugfs (it is automatically selected by the IO
trace option). I just wanted to mention it, because I did not figure
that out at first ;)
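
Something like:

    # blktrace reads its per-device data from debugfs
    mount -t debugfs debugfs /sys/kernel/debug
    # (blktrace has a -r option if you mount it somewhere else)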

You should of course write the output files to a location on a
different disk than the one you are tracing, to avoid an endless
flood of trace-generated IO.
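
For example, run it from a filesystem that does not live on the array
(the path is only an example):

    # write the trace files onto a disk that is not part of the traced array
    cd /mnt/spare-disk
    blktrace -w 60 -d /dev/md0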

Regards,
Patrick

PS: I know, I talked about blktrace twice already ;)
