Re: Enterprise workload testing for storage and filesystems

From: Alan D. Brunelle
Date: Fri Nov 21 2008 - 11:19:35 EST


K.S. Bhaskar wrote:
> On 11/20/2008 04:37 PM, Jeff Moyer wrote:
>> James Bottomley <James.Bottomley@xxxxxxxxxxxxxxxxxxxxx> writes:
>
> [KSB] <...snip...>
>
>> > Let's see how our storage and filesystem tuning measures up to this.
>>
>> This is indeed great news! The tool is very flexible, so I'd like to
>> know if we can get some sane configuration options to start testing.
>> I'm sure I can cook something up, but I'd like to be confident that what
>> I'm testing does indeed reflect a real-world workload.
>
> [KSB] Here are numbers for some tests that we ran recently:
>
> io_thrash -o 4 4 testdb 4000000 100000 12 8192 512 1000 90 90 10 512
> io_thrash -o 4 4 testdb 4000000 100000 12 8192 512 10000 90 90 10 512
> io_thrash -o 4 4 testdb 4000000 100000 12 8192 512 100000 90 90 10 512
> io_thrash -o 4 4 testdb 4000000 100000 12 8192 512 200000 90 90 10 512
>
> Note that these are relatively modest tests (4x32GB database files, all
> on one file system, 12 processes). To simulate bigger loads, allow the
> journal file sizes to grow to 4GB, use a configuration file to spread
> the database and journal files on different file systems, take the
> number of processes up into the hundreds and database sizes into the
> hundreds of GB. To keep test times reasonable, use the smallest numbers
> that give insightful results (after a point, making things bigger adds
> more time, but does not yield additional insights into system behavior,
> which is what we are trying to achieve).
>
> Regards
> -- Bhaskar

Thanks for the additional feedback, Bhaskar - I've been playing with this
on and off for the last couple of days, trying to stress one testbed (16-way
AMD, 128GB RAM, two P800 Smart Arrays, 48 disks total placed into a single
LVM2/DM volume). I've been able to get the I/O subsystem 100% utilized, but
in doing so didn't really stress the rest of the system (something like
80-90% idle).
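
(For reference, the "100% utilized" vs. "80-90% idle" above is the sort of
thing visible from something like the following while a run is in flight -
the exact commands/intervals are just an example, not part of io_thrash:)

  # per-device utilization ("%util" column) on the Smart Array volume
  iostat -x 5
  # overall CPU idle ("id" column) and blocked processes ("b" column)
  vmstat 5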

To stress the whole system, it sounds like it _may_ be better to use 48
separate file systems on 48 separate platters (each with its own DB)? Or
are there other knobs to turn to get more of the system involved besides
the I/O? Is it a good idea to separate the journals from the DBs (separate
FS/platter)?
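
To be concrete, by "48 separate file systems" I mean roughly the layout
sketched below - the device names, filesystem type and the even/odd split
of DBs vs. journals are just placeholders for this box, not a
recommendation:

  # one file system per platter (each Smart Array disk exported as its
  # own logical drive); device names here are illustrative only
  for i in $(seq 0 47); do
      mkfs.ext3 /dev/cciss/c0d$i
      mkdir -p /mnt/vol$i
      mount /dev/cciss/c0d$i /mnt/vol$i
  done
  # then point each DB at its own mount, e.g. DBs on /mnt/vol0..23 with
  # their journals on /mnt/vol24..47, so journal and DB never share a
  # platter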

Regards,
Alan
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/