Re: Performance Characteristics of All Linux RAIDs (mdadm/bonnie++)
From: Chris Snook
Date: Wed May 28 2008 - 11:40:49 EST
Justin Piszcz wrote:
Hardware:
1. Used six 400 GB SATA hard drives.
2. Everything is on PCIe (965 chipset & a 2-port SATA card).
Used the following 'optimizations' for all tests.
# Set read-ahead.
echo "Setting read-ahead to 32 MiB (65536 sectors) for /dev/md3"
blockdev --setra 65536 /dev/md3
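Note that blockdev --setra takes its argument in 512-byte sectors, so 65536 sectors comes out to 32 MiB of read-ahead. A quick sanity check of the arithmetic:

```shell
# blockdev --setra is specified in 512-byte sectors, so 65536 sectors
# equals 65536 * 512 bytes = 32 MiB of read-ahead:
sectors=65536
ra_mib=$(( sectors * 512 / 1024 / 1024 ))
echo "read-ahead = ${ra_mib} MiB"
```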
# Set stripe_cache_size for RAID5.
echo "Setting stripe_cache_size to 16384 pages for /dev/md3"
echo 16384 > /sys/block/md3/md/stripe_cache_size
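For reference, stripe_cache_size is counted in pages per member device, so its real memory footprint is larger than the raw number suggests. Assuming the usual 4 KiB page size and the six-disk array above, the total works out as follows:

```shell
# Total stripe cache memory = stripe_cache_size * page_size * nr_disks
# (assuming 4 KiB pages and the six-disk array described above):
stripe_cache_size=16384
page_size=4096
nr_disks=6
cache_mib=$(( stripe_cache_size * page_size * nr_disks / 1024 / 1024 ))
echo "stripe cache memory = ${cache_mib} MiB"
```

That is 64 MiB per member disk, 384 MiB total, so this setting is far from free on a RAM-constrained box.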
# Disable NCQ on all disks.
echo "Disabling NCQ on all disks..."
for i in $DISKS
do
echo "Disabling NCQ on $i"
echo 1 > /sys/block/"$i"/device/queue_depth
done
Given that one of the greatest benefits of NCQ/TCQ is with parity RAID,
I'd be fascinated to see how enabling NCQ changes your results. Of
course, you'd want to use a single SATA controller with a known good NCQ
implementation, and hard drives known to not do stupid things like
disable readahead when NCQ is enabled.
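For anyone wanting to rerun the comparison with NCQ enabled, a minimal sketch is below. The device names and the queue depth of 31 (the usual SATA NCQ maximum) are assumptions; the loop is demonstrated against a throwaway fake sysfs tree so it runs without root. On real hardware, set SYSBLOCK=/sys/block and run as root.

```shell
# Sketch: re-enable NCQ by restoring queue_depth (31 is the usual SATA
# NCQ maximum; 1 effectively disables it, as in the original test).
# Demonstrated on a fake sysfs tree; device names are hypothetical.
SYSBLOCK=$(mktemp -d)            # stand-in for /sys/block
DISKS="sda sdb"                  # hypothetical disk list
for d in $DISKS; do
    mkdir -p "$SYSBLOCK/$d/device"
    echo 1 > "$SYSBLOCK/$d/device/queue_depth"   # NCQ off, as tested above
done

DEPTH=31
for d in $DISKS; do
    echo "$DEPTH" > "$SYSBLOCK/$d/device/queue_depth"
    echo "queue_depth on $d is now $(cat "$SYSBLOCK/$d/device/queue_depth")"
done
```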
-- Chris
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/