Mike Miller cried from the depths of the abyss...

>> If you are right, that means the message I sent back in October about how 
>> to do the RAID1 did not work for me.
>
> Here's that message from October:
>
> http://shadowknight.real-time.com/pipermail/tclug-list/2011-October/061034.html
>
> Does anyone see anything wrong in it?

I didn't see anything wrong in your link.  Something might have gone bad 
when you converted the disks from GPT to MBR, but I don't know.  I went 
over the instructions I sent you last year and cleaned them up a little. 
It all checks out, so this should work fine.

Here ya go:
Try to do the raid prep/setup outside of the Ubuntu installer first (this 
is just my preference).

This is how I set up software RAID 1s, and it has worked every time for 
me.  To be honest, I haven't done this on Ubuntu, but I did just load the 
latest Ubuntu live CD to check, and all the commands exist, so this should 
work fine.  I have done this >30 times on Slackware, and a handful of 
times on CentOS & Fedora.  I actually used a Slackware install disk to set 
up the RAIDs on Fedora & CentOS, but that is not necessary.  The Ubuntu 
disk will work just fine.

I personally like fdisk to create my partitions, but you can use cfdisk 
(or anything else Ubuntu might have that you like).  On disk 1 (let's call 
it /dev/sda) create at least two partitions (one for swap & one for /). 
Change the type on both partitions to "Linux RAID autodetect", type "fd".
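
If it helps, here is roughly what that looks like in fdisk (just a sketch 
-- the sizes are placeholders, adjust for your disk):

fdisk /dev/sda
   n        <- new partition 1 (this will be /, give it most of the disk)
   n        <- new partition 2 (swap, a couple gigs or so)
   t        <- change a partition's type
   1
   fd       <- Linux RAID autodetect
   t
   2
   fd
   w        <- write the table & quit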

I like to leave a couple hundred megs free at the end of the disk, just in 
case I need to replace one later with a drive that isn't exactly the same 
size.  This of course is optional.

Now copy your partition table to the 2nd drive (let's call it /dev/sdb) 
like so:
sfdisk -d /dev/sda | sfdisk /dev/sdb

When this command finishes it will display the new partition layout for 
/dev/sdb; both drives should now match.
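
If you want to eyeball it yourself, dump both tables and compare (same 
sfdisk we just used):

sfdisk -d /dev/sda
sfdisk -d /dev/sdb

The partition entries should be identical on both drives.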

Next, create your RAID 1s.
1st - the root partition (or swap, depending on how you created your 
partitions):

mdadm --create /dev/md0 --level 1 --raid-devices 2 \
    /dev/sda1 /dev/sdb1 --metadata=0.90

Do the same for your other partition

mdadm --create /dev/md1 --level 1 --raid-devices 2 \
    /dev/sda2 /dev/sdb2 --metadata=0.90
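
The arrays will start syncing in the background as soon as they're 
created.  You don't need to wait for that to finish before moving on, but 
if you want to keep an eye on it:

cat /proc/mdstat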

Now format your swap array (assuming your swap is /dev/md1)
mkswap /dev/md1

Now start your install like normal.  You should see /dev/md1 available for 
your swap, and /dev/md0 available for your root.

At this point I must take a step back.  I'm old school, and prefer LILO on 
my boxes.

For GRUB you will need to do the following post-install (prior to reboot).

I am guessing that Ubuntu (not sure, I don't really use it) will attempt 
to install GRUB for you (into your MBR).  If it does, it most likely will 
fail (I've been surprised before though, and perhaps those sneaky people 
over at Ubuntu have this figured out).  If it fails, that is OK. 
Let's just play it safe, assume it's all F-ed up, and we'll make it right.

Put GRUB on disk 1's MBR:
grub-install --root-directory=/boot /dev/sda

cd /boot/boot/grub

touch menu.lst

Create a menu.lst for GRUB.  I usually do something like this:

timeout 10
title Linux
root (hd0,0)
kernel /vmlinuz root=/dev/md0 ro
boot

(I'm not sure of the kernel naming scheme on Ubuntu, so double-check the 
kernel line & make sure to point root= at the proper /dev/md# for your 
root partition.)
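
One more thing worth checking (an assumption on my part, since I don't run 
Ubuntu): Ubuntu kernels normally boot with an initrd, so your stanza will 
probably want an extra line right after the kernel line, something like:

initrd /initrd.img

Point it at whatever initrd image the installer actually put on the disk.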

Save this, and take a peek at your /etc/fstab file to make sure your swap 
& / are pointing at /dev/md1 & /dev/md0:

/dev/md1         swap             swap        defaults         0   0
/dev/md0         /                ext3        defaults         1   1
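
(Ubuntu may have written ext4 instead of ext3, or UUID= entries instead of 
the /dev/md names.  Either is fine as long as they resolve to the md 
devices -- "blkid /dev/md0 /dev/md1" will show you the UUIDs to compare 
against.)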


FOR LILO (if anyone else besides me still uses it), here is how to handle 
that.

Same as with GRUB, this needs to be completed post-install but prior to 
reboot, and it also gets installed to your disk 1 MBR.

Edit /etc/lilo.conf

add a new line with:

raid-extra-boot = mbr-only

change the "boot" option to point to your raid 1 partition like:

boot = /dev/md0
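
For reference, the relevant chunk of lilo.conf ends up looking roughly 
like this (the image/root/label lines are just an example -- keep whatever 
your distro already put there):

boot = /dev/md0
raid-extra-boot = mbr-only
image = /boot/vmlinuz
   root = /dev/md0
   label = Linux
   read-only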

save & exit

issue the "lilo" command to rewrite it to the MBR.  Reboot & enjoy the 
ride.


Please note these notes only apply to RAID 1 setups.  RAID 0 & RAID 5 are 
similar, but the config is slightly different.  Just wanted to throw that 
out there.

Issue:     mdadm --detail /dev/md[01]
This should show you a working RAID 1, like so:

mdadm --detail /dev/md[01]
/dev/md0:
         Version : 00.90.03
   Creation Time : Thu Feb  5 06:53:49 2009
      Raid Level : raid1
      Array Size : 37012160 (35.30 GiB 37.90 GB)
   Used Dev Size : 37012160 (35.30 GiB 37.90 GB)
    Raid Devices : 2
   Total Devices : 2
Preferred Minor : 0
     Persistence : Superblock is persistent

     Update Time : Sun Apr  1 23:01:28 2012
           State : clean
  Active Devices : 2
Working Devices : 2
  Failed Devices : 0
   Spare Devices : 0

            UUID : bc437f7e:296132cc:6681f900:c6d698d4
          Events : 0.8

     Number   Major   Minor   RaidDevice State
        0       3        1        0      active sync   /dev/hda1
        1       3       65        1      active sync   /dev/hdb1
/dev/md1:
         Version : 00.90.03
   Creation Time : Thu Feb  5 06:54:47 2009
      Raid Level : raid1
      Array Size : 1953408 (1907.95 MiB 2000.29 MB)
   Used Dev Size : 1953408 (1907.95 MiB 2000.29 MB)
    Raid Devices : 2
   Total Devices : 2
Preferred Minor : 1
     Persistence : Superblock is persistent

     Update Time : Sun Apr  1 19:18:19 2012
           State : clean
  Active Devices : 2
Working Devices : 2
  Failed Devices : 0
   Spare Devices : 0

            UUID : b9b064c6:63055849:d60f5ccb:8d370dd1
          Events : 0.6

     Number   Major   Minor   RaidDevice State
        0       3        2        0      active sync   /dev/hda2
        1       3       66        1      active sync   /dev/hdb2

Good Luck!

Mr. B-o-B


--
"I want to learn the ways of the Source, and be a Jedi like my Father"