to be clear:

you are using high point hardware raid 5 on each controller.

so for controller A - you have 4 250gb drives attached to it, with 3
of them in a raid 5 and 1 as a hot spare = 500gb usable.

controller B is the same setup.
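
(for the math: raid5 usable space is (N-1) x drive size for the N
drives actually in the array, so 3 x 250gb in the array plus the 4th
as a spare gives 2 x 250gb = 500gb usable per controller, or about
1tb across both.)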

then, inside of CentOS - controller A's raid5 volume shows up as
/dev/sda and controller B's raid5 volume shows up as /dev/sdb.

then you created a software raid 0 stripe (md0) across /dev/sda1 and
/dev/sdb1, which would yield roughly 1tb of disk space.
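
as a quick sanity check on that recap - the device names here are
assumptions from your description (/dev/sda1 and /dev/sdb1 as the md
members, /dev/md0 as the stripe) - something like this would show
what the software raid layer thinks of each half:

   # software raid's view of each member partition
   mdadm --examine /dev/sda1
   mdadm --examine /dev/sdb1

   # status of the assembled stripe, if it even exists right now
   mdadm --detail /dev/md0

   # what the kernel currently has assembled
   cat /proc/mdstat

posting that output would make it easier to see where things stand.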

then some disks stopped working, and you swapped out the power
supply, etc.

controller A has one disk dead, but the hot spare kicked in and the
rebuild finished, so things are good on controller A (except that you
no longer have a hot spare) and CentOS sees /dev/sda just fine.

controller B has multiple disk failures and you only see 1 disk?  if
that is the case, i think you're screwed.  controller B first has to
have enough good disks to bring up your hardware-level raid5 volume,
which would then get presented to CentOS as /dev/sdb - the other half
of your CentOS raid-0 stripe.  until you can get at least 2 disks
running correctly on controller B, your raid-5 volume won't be there.
you need 2 of the 3 array members at a minimum, which would leave the
array degraded and slow, but at least running.  then, if your CentOS
raid-0 still won't assemble, you can start looking at that.
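
when you do get to that point - and assuming both volumes were
partitioned identically and show up as the same size, which is an
assumption on my part - one way to tackle the missing partition table
would be to copy the layout over from the good volume with sfdisk.  a
rough sketch, not something to run blind (dump it to a file and
eyeball it before writing anything):

   # dump the partition layout of the good array and review it
   sfdisk -d /dev/sda > sda-table.txt

   # replay that layout onto the other array (at your own risk)
   sfdisk /dev/sdb < sda-table.txt

   # make the kernel re-read the new table
   partprobe /dev/sdb

   # then check whether the md superblock on sdb1 survived
   mdadm --examine /dev/sdb1

if the md superblock is still there, mdadm should be able to assemble
the stripe again; if it's gone, that's a different (and uglier)
problem.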

is this an accurate recap?  if not, please clarify.





TW Woodward wrote:
> Hello!
>
> I need some advice.  Here is what happened:
>
> The set-up:
>
> Two 4 port High Point 1740 SATA controllers.  Each controller had four 
> 250GB drives attached to it.  Each controller was set-up as a separate 
> RAID5 with one spare drive.  In Centos 5 Linux, this created two drives 
> in /dev, sda and sdb.
>
> I needed a large storage space, without (as I thought) a high level of 
> safety.  So I partitioned the drives in parted and created sda1 and 
> sdb1.  I then used mdadm to create a RAID0 system across these two 
> drives.  The RAID device is called md0.
>
> Everything worked fine.  It was doing exactly what it was designed for, 
> which was a large (1TB) temp storage space.  Then people saw that it was 
> a large storage space, so they started storing semi-critical data 
> (images) there.  They stored a lot of this data, 300+ GB.  Then we had a 
> power surge.  The power surge damaged the power supply.  It wasn't 
> damaged enough to just drop the motherboard or fry a drive, it acted 
> slowly.  It started with one drive on the second controller.  It worked 
> intermittently during one week (while I was on vacation).  When I got 
> back to work on the machine, another drive dropped, then another in 
> quick succession.
>
> So I replaced the power supply and hooked the drives back up.  Two of 
> the three dropped drives came back.  One of the drives was completely 
> dead.  I was able to add a spare drive and use the High Point supplied 
> GUI to rebuild the RAID5 array.
>
> Here is where it stands:  sda and sdb appear in /dev.  That is good.  
> sda1 appears in /dev. that is good.  sdb1 does not appear in /dev.  That 
> is bad.  Apparently, the partition table was dropped on the second 
> array.  When I run mdadm to rebuild, it tells me there is only one drive 
> (sda1) in the array.  When I run parted, it cannot find any partition 
> information on sdb.
>
> So here is where I am at.  Does anybody know of a way to restore/rebuild 
> this partition table?  Are the tables identical in sda1 and sdb1?  What 
> I mean is, in a RAID0, are the tables written across the drives?  Can I 
> copy the table from sda1 to sdb?  How do I do that?
>
> Thanks in advance.  And I already know, it was a stupid set-up, it was a 
> frail system, etc., etc.  But before you let me have it, take this into 
> account:  the power surge was caused by the owner of the company 
> indiscriminately throwing circuit breakers.  Even better:  he was 
> throwing circuit breakers with an electrician because they were trying 
> to determine how to run power down to the new data center.
>
> TW
>
>
>