Thanks BoB.  That got me running (after I cleaned up some of my own mess).
The problem I had running your commands was that, in following some
other directions, I'd removed and recreated the partition on /dev/sdc.
 The new one I created wasn't the correct size (too small).

Resolved that by:

# save sdb's partition table to a file, then clone it onto sdc
sgdisk --backup=table /dev/sdb
sgdisk --load-backup=table /dev/sdc
# randomize the disk and partition GUIDs so the clone doesn't collide with sdb's
sgdisk -G /dev/sdc
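
A quick sanity check before the re-add (assuming the same /dev/sdb and
/dev/sdc names as above) is to print both partition tables and confirm
sdc1's start and end sectors now match sdb1's:

# print both GPTs and compare the partition 1 entries
sgdisk -p /dev/sdb
sgdisk -p /dev/sdc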

Once the --re-add command completed successfully, I was able to run
'mdadm --assemble --scan'.
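
For the archive, that sequence was roughly (my device names; adjust to
your own layout):

sudo mdadm --manage --re-add /dev/md0 /dev/sdc1
sudo mdadm --assemble --scan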

The array is rebuilding now:

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[1] sdb1[0] sde1[4] sdd1[2]
      8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [U_UU]
      [>....................]  recovery =  3.3% (97349984/2930135040) finish=545.3min speed=86578K/sec
      bitmap: 9/22 pages [36KB], 65536KB chunk

unused devices: <none>
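
Once the recovery finishes, checking the array again should show all four
devices active (assuming nothing else goes sideways):

mdadm --detail /dev/md0    # look for "State : clean" and 4 active devices
cat /proc/mdstat           # [4/4] [UUUU] once the rebuild is done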

Thanks for the help, BoB and Marc

On Mon, Jun 2, 2014 at 1:12 PM, B-o-B De Mars <mr.chew.baka at gmail.com> wrote:
> On 6/1/2014 6:15 PM, Mark Mitchell wrote:
>
>> I built my first RAID array about a week ago, kinda vaguely
>> understanding what I'm doing.  Four 3 TB drives in a RAID 5.  It's been
>> working fine, and I've been slowly filling it up.
>>
>> Took the machine apart today to upgrade case cooling and power supply.
>> When I brought it back up, I got this in the boot messages:
>>
>> md/raid:md0: raid level 5 active with 3 out of 4 devices, algorithm 2
>>
>> My first thought was that I hadn't gotten everything hooked back up (I
>> had to disconnect most of the drives to get to the power connections on
>> the board).  Several reboots later, I'd confirmed that the system was
>> seeing all the drives:
>>
>> root@debian:/var/log# lsblk
>> NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
>> sda           8:0    0 931.5G  0 disk
>> ├─sda1        8:1    0    14G  0 part  /
>> ├─sda2        8:2    0  37.3G  0 part  [SWAP]
>> └─sda3        8:3    0 880.3G  0 part  /home
>> sdb           8:16   0   2.7T  0 disk
>> └─sdb1        8:17   0   2.7T  0 part
>>    └─md0       9:0    0   8.2T  0 raid5
>>      └─md0p1 259:0    0   8.2T  0 md    /srv/media
>> sdc           8:32   0   2.7T  0 disk
>> └─sdc1        8:33   0   2.7T  0 part
>> sdd           8:48   0   2.7T  0 disk
>> └─sdd1        8:49   0   2.7T  0 part
>>    └─md0       9:0    0   8.2T  0 raid5
>>      └─md0p1 259:0    0   8.2T  0 md    /srv/media
>> sde           8:64   0   2.7T  0 disk
>> └─sde1        8:65   0   2.7T  0 part
>>    └─md0       9:0    0   8.2T  0 raid5
>>      └─md0p1 259:0    0   8.2T  0 md    /srv/media
>> sr0          11:0    1   7.3G  0 rom
>>
>> So, it looks like the partition on sdc isn't being seen as a raid
>> partition anymore.
>>
>> FWIW:
>> root@debian:/var/log# cat /proc/mdstat
>> Personalities : [raid6] [raid5] [raid4]
>> md0 : active raid5 sdb1[0] sde1[4] sdd1[2]
>>        8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [U_UU]
>>        bitmap: 4/22 pages [16KB], 65536KB chunk
>>
>> unused devices: <none>
>>
>> So, what's my next step?  Looks to me like I need to recreate sdc1 as
>> a raid partition, then regrow the array.  Is this correct?
>>
>> In the meantime, I'm reading what I can and copying data off the
>> array.  Nothing critical, but it'd be annoying to lose.
>>
>> Let me know if I left out required information.
>>
>
> Do an mdadm --detail /dev/md0   (or md* for all), and look for the failed
> device/partition #.
>
> *Adjust the below to your config*
>
> First fail the drive
> sudo mdadm /dev/md0 -f /dev/sdc1
>
> Then remove the drive
> sudo mdadm /dev/md0 -r /dev/sdc1
>
> To re-add an out-of-sync partition
> I just "sudo mdadm --manage --re-add /dev/md0 /dev/sdc1"
>
> To check rebuild status
>
> watch cat /proc/mdstat
>
> Done.
>
> Good Luck!
>
> _______________________________________________
> TCLUG Mailing List - Minneapolis/St. Paul, Minnesota
> tclug-list at mn-linux.org
> http://mailman.mn-linux.org/mailman/listinfo/tclug-list