i wanted another primary partition.  my (rhel5.4) VG has lots of free
space.  i want to remote-install rhel6 beta 2 from hard disk, since an
rhel6 askmethod install seems to get stuck initializing the nics.

so i shrank the big PV with pvresize, then shrank the underlying raid1
with mdadm, so far so good, but presumably the next step went awry: i
removed and recreated the underlying partitions with parted in an
attempt to shrink them.  to cut to the chase, the system now boots and
the LV filesystem is fine, but lvm no longer thinks it has any VG, PV,
or LV.

so, what should i have done, what can i do now, and where should i be
asking these questions?

in more detail, i did
   pvresize --setphysicalvolumesize 280G /dev/md1   # (over)shrink; regrow later to actual smaller size
   mdadm --grow /dev/md1 --size=292000000           # (over)shrink raid (size in KiB)
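as a sanity check on my own numbers, a sketch, assuming lvm's 280G
means GiB and that mdadm --size takes KiB:

```shell
# compare the two shrink targets in the same unit (KiB)
pv_kib=$((280 * 1024 * 1024))   # 280 GiB, the size i gave pvresize
md_kib=292000000                # the size i gave mdadm --grow
echo "pv=$pv_kib KiB  md=$md_kib KiB"
[ "$md_kib" -lt "$pv_kib" ] && echo "oops: the raid is now smaller than the PV claims to be"
```

if that arithmetic is right, the array ended up roughly 1.5 GiB
smaller than the PV, which is backwards for a shrink and may be part
of the problem.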
   parted
      sel /dev/sdb
         rm 2
         mkpart primary ext2 107MB 309970MB
            which gave an error, so i presumed it had done nothing, but
         mkpart primary 107MB 309970MB
            this made it clear the prior mkpart had created the partition after all
         set 2 raid on
      sel /dev/sda
         rm 2
         mkpart primary 107MB 309970MB
            no fstype, no error
         set 2 raid on
   mdadm --grow /dev/md1
      mdadm: no changes to --grow
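in hindsight, i suspect the right move before any rm was to record the
exact sector boundaries, since parted rounds MB arguments and may not
put the new partition back on the old start sector.  a dry-run sketch
(DRY=1 just prints the commands):

```shell
# hindsight sketch: capture exact partition boundaries in sectors before
# deleting anything, so mkpart can recreate them verbatim
DRY=${DRY:-1}                              # 1 = only print the commands
run() { [ "$DRY" = 1 ] && echo "+ $*" || "$@"; }
run parted -s /dev/sdb unit s print        # record start/end in sectors
# ...then recreate with those exact values, e.g.
#   run parted -s /dev/sdb mkpart primary <start>s <end>s
```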

so i rebooted, with no problem, thinking perhaps parted's changes
weren't visible yet.  now:
  mdadm --query /dev/md1
     /dev/md1: is an md device which is not active
     /dev/md1: No md super block found, not an md component.
  mdadm --examine /dev/sda2
     mdadm: No md superblock detected on /dev/sda2.
  mdadm --examine /dev/sdb2
     mdadm: No md superblock detected on /dev/sdb2.
and vgs, pvs, and lvs all produce no output.
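one read-only thing i can still check is whether the lvm label
survived on the raw partitions.  a probe sketch, where DEV is a
placeholder, assuming the PV label is the usual "LABELONE" string near
the start of the device:

```shell
# look for the lvm2 label, normally in one of the first four 512-byte
# sectors of the PV; read-only. DEV is a placeholder device.
DEV=${DEV:-/dev/sda2}
if dd if="$DEV" bs=512 count=8 2>/dev/null | grep -aq LABELONE; then
  echo "lvm label still present on $DEV"
else
  echo "no lvm label in the first 4 KiB of $DEV"
fi
```

if the old raid used 0.90 metadata (superblock at the end of the
partition), the PV should start right at the partition start, so
finding LABELONE there would suggest the data survived and only the
partition/raid bookkeeping is off.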

so, is there any way to recover or rebuild the PV/VG superblock?  is
that what i actually need to do?  i'd expect someone somewhere has
been here before, or at least knows this stuff better than i do..
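from what i've read, one candidate recovery path is to recreate the
array in place without touching the data blocks, then restore the lvm
metadata from the automatic backups lvm keeps under /etc/lvm/backup.
a dry-run sketch, unverified: "myvg" and the uuid are placeholders to
take from the backup file, and i'm assuming the old array used 0.90
metadata (the rhel5 default):

```shell
# DANGEROUS if the assumptions are wrong, so DRY=1 only prints the commands
DRY=${DRY:-1}
run() { [ "$DRY" = 1 ] && echo "+ $*" || "$@"; }
# recreate the mirror in place; --assume-clean skips the resync,
# --metadata=0.90 puts the superblock at the end like the old array
run mdadm --create /dev/md1 --level=1 --raid-devices=2 \
         --metadata=0.90 --assume-clean /dev/sda2 /dev/sdb2
# rewrite the PV header with its old uuid, then restore the VG metadata
run pvcreate --uuid PV-UUID-FROM-BACKUP-FILE --restorefile /etc/lvm/backup/myvg /dev/md1
run vgcfgrestore myvg
run vgchange -ay myvg
```

this only has a chance if the recreated partitions cover exactly the
same sectors as before.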

also, even if mdadm doesn't think there's a raid anymore, perhaps some
layer in the kernel still knows better, or are current filesystem
changes perhaps only going to sda?  how might i probe this?  or
recover from it?  the mounted filesystems do bear the proper LV device
names, fwiw..
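these are the read-only probes i'd try for what the kernel itself
still believes, sketched with a small wrapper so a missing command
just says so:

```shell
# each probe prints a header, then the command's output or "(unavailable)"
probe() { echo "== $*"; "$@" 2>/dev/null || echo "(unavailable)"; }
probe cat /proc/mdstat   # arrays the kernel is still running, if any
probe dmsetup table      # the underlying device (major:minor) behind each mapped LV
probe mount -t ext3      # which device names the mounted filesystems report
```

if dmsetup table shows the LVs mapped onto md1's major:minor, writes
are still going through the raid; if it shows sda2's, the mirror
really is gone from the kernel's point of view.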