Re: ide performance limits

From: bug1 (bug1@netconnect.com.au)
Date: Wed Jun 14 2000 - 05:35:51 EST


Andre Hedrick wrote:
>
> Those are buffered and not from the disk
>
> use '-t'
>

Hi Andre,

If buffered means reading from the drive's buffer, then I thought that
would be a good way to test the limits of the IDE code on my system. I
wasn't after practical benchmarks; I was trying to stress my system to
see where the bottlenecks are.

I recently gave up trying to get more than two HPT366 channels working
concurrently and got Promise Ultra/66 cards; my problems with the HPT366
were at least partly load-related. I'm running a dual 433 Celeron (BP6)
with 128MB RAM, and contrary to many people's opinion of the BP6, the
only problems I've had were HPT366-related.

From the tests below I hope you can see that, on my hardware at least,
there appears to be an IDE performance bottleneck. The bottleneck is
beyond what a single drive can achieve, so it only becomes apparent when
using software RAID or LVM.

With the tests below I can't get a total read speed of more than about
30MB/s, irrespective of how many drives I read from in parallel, yet by
running hdparm -T simultaneously on two separate drives I could get
45MB/s from each, which would suggest an IDE bottleneck of around 90MB/s.
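
(The simultaneous test is nothing fancy, just two hdparm processes run
from the shell at once, along these lines; /dev/hde and /dev/hdi are used
here since those sit on separate Promise cards.)

  # start both timings at once, then wait for both to finish
  hdparm -T /dev/hde &
  hdparm -T /dev/hdi &
  wait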

I am struggling to work out good testing methods; none of them give me
the result I expect... but that's what benchmarks are for, right?

Do you have any suggestions on how to find IDE performance bottlenecks,
or am I barking up the wrong tree?
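
One cross-check I can think of, independent of hdparm, is timing parallel
raw reads with dd, along these lines (256MB per drive is an arbitrary
figure, just large enough to get past any caching):

  # read 256MB raw from each drive in parallel and time the whole lot
  time ( for d in /dev/hde /dev/hdg /dev/hdi /dev/hdk; do
             dd if=$d of=/dev/null bs=1024k count=256 &
         done; wait )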

It could just be me, maybe I'm just jinxed, but there are others on
linux-raid who also have IDE performance problems, and apparently SCSI
RAID doesn't have any such problems (is that a good motivator for you?).

I appreciate your advice

Glenn

P.S. I'm not trying to be critical of your code, I'm just trying to work
out how much performance is possible with IDE RAID.

RAID0
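
(The stripes below are ordinary raid0 md devices; assuming the 0.90-style
raidtools, a minimal raidtab for the 4-way case would look something like
this. The chunk size is only an example, not necessarily what these runs
used.)

 raiddev /dev/md0
         raid-level              0
         nr-raid-disks           4
         persistent-superblock   1
         chunk-size              32
         device                  /dev/hde
         raid-disk               0
         device                  /dev/hdg
         raid-disk               1
         device                  /dev/hdi
         raid-disk               2
         device                  /dev/hdk
         raid-disk               3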

4-way raid0 (/dev/hde, /dev/hdg, /dev/hdi, /dev/hdk)
/dev/md0:
 Timing buffer-cache reads: 128 MB in 1.67 seconds = 76.65 MB/sec
 Timing buffered disk reads: 64 MB in 2.09 seconds = 30.62 MB/sec

3-way raid0 (/dev/hde, /dev/hdg, /dev/hdi)
/dev/md0:
 Timing buffer-cache reads: 128 MB in 1.59 seconds = 80.50 MB/sec
 Timing buffered disk reads: 64 MB in 2.15 seconds = 29.77 MB/sec

2-way raid0 (/dev/hde, /dev/hdg)
/dev/md0:
 Timing buffer-cache reads: 128 MB in 1.59 seconds = 80.50 MB/sec
 Timing buffered disk reads: 64 MB in 1.94 seconds = 32.99 MB/sec

1-way raid0 (/dev/hde)
/dev/md0:
 Timing buffer-cache reads: 128 MB in 1.51 seconds = 84.77 MB/sec
 Timing buffered disk reads: 64 MB in 3.76 seconds = 17.02 MB/sec

The drives individually

hdparm -Tt /dev/hde
 Timing buffer-cache reads: 128 MB in 1.54 seconds = 83.12 MB/sec
 Timing buffered disk reads: 64 MB in 2.92 seconds = 21.92 MB/sec

hdparm -Tt /dev/hdg
 Timing buffer-cache reads: 128 MB in 1.55 seconds = 82.58 MB/sec
 Timing buffered disk reads: 64 MB in 2.90 seconds = 22.07 MB/sec

hdparm -Tt /dev/hdi
 Timing buffer-cache reads: 128 MB in 1.54 seconds = 83.12 MB/sec
 Timing buffered disk reads: 64 MB in 3.33 seconds = 19.22 MB/sec

hdparm -Tt /dev/hdk
 Timing buffer-cache reads: 128 MB in 1.54 seconds = 83.12 MB/sec
 Timing buffered disk reads: 64 MB in 3.28 seconds = 19.51 MB/sec

> On Wed, 14 Jun 2000, bug1 wrote:
>
> > I've been trying to overcome performance problems with software RAID0;
> > currently under 2.[34] a single drive has better read performance than
> > a 4-way IDE RAID0 (striping), and write performance seems to be limited
> > to about 30MB/s. For me, IDE RAID just doesn't scale well at all. I've
> > mentioned this on the linux-raid mailing list and Ingo says that SCSI
> > scales well according to his tests. Others on the list also noted poor
> > IDE performance.
> >
> > So... I think IDE could have some performance limitations that are only
> > noticeable when using multiple disks.
> >
> > I modified hdparm to use 1280MB for Timing buffer-cache-reads to do the
> > benchmark.
> >
> > If I do hdparm -T /dev/hde I get 86MB/s, and then if I do hdparm -T
> > /dev/hdi I also get 86MB/s.
> >
> > If I do them both at the same time I get 43MB/s for each.
> >
> > The drives are UDMA66, each on its own Promise UDMA66 PCI card, and are
> > detected and used by the kernel as UDMA66.
> >
> > Shouldn't the performance of these drives be independent of each other?
> > Why would one drive slow the other down?
> >
> > I run it on a dual 433 Celeron with 128MB RAM. When I run only one
> > instance of hdparm it uses 50% of CPU resources; when I run both
> > concurrently they use approximately 100% combined. So I assume hdparm
> > isn't multi-threaded, and given this I don't think CPU resources should
> > be causing the bottleneck.
> >
> > Could this be a software (kernel) limitation rather than hardware?
> >
> >
> > Thanks
> >
> > Glenn
> >
>
> Andre Hedrick
> The Linux ATA/IDE guy



