Re: Problem recovering a failed RAID5 array with 4 drives.

From: Lennart Sorensen
Date: Thu Jul 12 2007 - 12:44:16 EST


On Thu, Jul 12, 2007 at 08:49:15AM -0500, James wrote:
> My apologies if this is not the correct forum. If there is a better place
> to post this, please advise.
>
>
> Linux localhost.localdomain 2.6.17-1.2187_FC5 #1 Mon Sep 11 01:17:06 EDT 2006
> i686 i686 i386 GNU/Linux
>
> (I was planning to upgrade to FC7 this weekend, but that is currently on hold
> because-)
>
> I've got a problem with a software RAID5 using mdadm.
> Drive sdc failed, causing sda to appear failed. Both drives were marked
> as 'spare'.
>
> What follows is a record of the steps I've taken and the results. I'm looking
> for some direction/advice to get the data back.
>
>
> I've tried a few cautious things to bring the array back up with the three
> good drives, with no luck.
>
> The last thing I attempted had some limited success. I was able to get all
> drives powered up. I checked the Event count on the three good drives and
> they were all equal, so I assumed it would be safe to proceed. I hope I was
> not wrong. I issued the following command to try to bring the array into a
> usable state.
>
> []#
> mdadm --create --verbose /dev/md0 --assume-clean --level=raid5 --raid-devices=4 --spare-devices=0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

Don't you want --assemble rather than --create if the array already exists?
--create writes new superblocks, which is the last thing you want when you
are trying to recover existing data.
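
For reference, --assemble only reads the superblocks that are already on the
disks instead of writing new ones, so the non-destructive equivalent of the
command above would be something along these lines (device names taken from
your mail, so double check them first):

  mdadm --assemble --verbose /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

With sdc dead you would normally leave it out and assemble degraded, more on
that below.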

How did two drives fail at the same time? Are you running PATA drives
with two drives on a single cable? That is a no-no for RAID. PATA drive
failures often take out the whole bus, and you never want two drives in a
single RAID array to share an IDE bus.
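
For what it's worth, one rough way to check how the drives map onto
controllers and channels (the exact paths depend on the driver, so treat
this as a sketch) is:

  # each symlink points at the controller/channel the drive hangs off
  ls -l /sys/block/sd?/device

If two of the array members turn out to share a channel, that would explain
the double failure.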

You probably want to try to assemble the array from the non-failed drives,
and then add the new replacement drive afterwards, since after all it is NOT
clean. Hopefully md will accept sda back even though it appeared failed.
Then you can add the new sdc to resync the array.
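
Roughly something like this (device names are from your mail and I am
assuming the replacement drive comes back as sdc, so double check everything
with mdadm --examine before forcing anything):

  # confirm the event counters on the three good members still match
  mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdd1 | grep -i events

  # assemble degraded from the three good members; --force tells md to
  # accept sda1 back even though it was marked failed
  mdadm --assemble --force --verbose /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdd1

  # once the degraded array is running, add the replacement and let it resync
  mdadm /dev/md0 --add /dev/sdc1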

--
Len Sorensen
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/