Sunday, January 27, 2013

How to Configure RAID 1 with LVM Volumes

This post describes how to enable RAID 1 using mdraid on a system whose volumes include LVM volumes.

When Oracle Enterprise Linux is installed, the default disk layout places the root file system and swap on an LVM volume group; the boot volume, /boot, remains a normal ext3 file system.

1. Initial Config


During installation, here is the default configuration:

[screenshot of the installer's default partition layout]

Which gives us this:

[root@oel5-raid1-3 ~]# uname -r
2.6.32-300.10.1.el5uek

[root@oel5-raid1-3 ~]# cat /etc/enterprise-release
Enterprise Linux Enterprise Linux Server release 5.8 (Carthage)


[root@oel5-raid1-3 ~]# cat /etc/fstab
/dev/VolGroup00/LogVol00 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/VolGroup00/LogVol01 swap                    swap    defaults        0 0



[root@oel5-raid1-3 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      7.7G  2.4G  5.0G  32% /
/dev/sda1              99M   24M   71M  25% /boot
tmpfs                 495M     0  495M   0% /dev/shm



[root@oel5-raid1-3 ~]# fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        1305    10377990   8e  Linux LVM

Disk /dev/dm-0: 8489 MB, 8489271296 bytes
255 heads, 63 sectors/track, 1032 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 2113 MB, 2113929216 bytes
255 heads, 63 sectors/track, 257 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-1 doesn't contain a valid partition table



[root@oel5-raid1-3 ~]# mount | grep ext3
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)

[root@oel5-raid1-3 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup00
  PV Size               9.90 GB / not usable 22.76 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              316
  Free PE               0
  Allocated PE          316
  PV UUID               5aMSJb-OALl-wztg-107U-bizd-wLB6-G25RcW

[root@oel5-raid1-3 ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               9.88 GB
  PE Size               32.00 MB
  Total PE              316
  Alloc PE / Size       316 / 9.88 GB
  Free  PE / Size       0 / 0
  VG UUID               dnYd54-w9ZG-METW-V1lP-WizL-wj7A-ZxJ6SO

[root@oel5-raid1-3 ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol00
  VG Name                VolGroup00
  LV UUID                6ric1n-Dtgg-uK4s-09KF-EUJv-zm3B-ok6UHn
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                7.91 GB
  Current LE             253
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol01
  VG Name                VolGroup00
  LV UUID                wpMR3Z-7CMn-c6Q9-GF6h-xh7g-dULW-AAVP5U
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.97 GB
  Current LE             63
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

[root@oel5-raid1-3 ~]# swapon -s
Filename                                Type            Size    Used    Priority
/dev/mapper/VolGroup00-LogVol01         partition       2064376 0       -1

Basically: /boot is a normal ext3 partition, while / is an LVM volume and swap is also on the LVM.


2. Add Second HDD


Add a second hard disk of the same size to the system:

[root@oel5-raid1-3 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table
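
If the new disk was hot-added to a running system, you may be able to detect it without a reboot by asking the kernel to rescan the SCSI bus (a sketch; host0 is an assumption - check /sys/class/scsi_host/ for the correct host on your system):

# rescan SCSI host0 for newly attached devices (host number is a guess)
echo "- - -" > /sys/class/scsi_host/host0/scan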

3. Partition the second HDD


The second hard disk must have the same partition layout as the first. The easiest way to replicate it is with the sfdisk utility:

[root@oel5-raid1-3 ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 1305 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/sdb1   *        63    208844     208782  83  Linux
/dev/sdb2        208845  20964824   20755980  8e  Linux LVM
/dev/sdb3             0         -          0   0  Empty
/dev/sdb4             0         -          0   0  Empty
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

4. Modify the Secondary Disk partitions to type RAID


Use the fdisk utility to modify the partitions on the second disk to type fd (RAID):


[root@oel5-raid1-3 ~]# fdisk /dev/sdb

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   83  Linux
/dev/sdb2              14        1305    10377990   8e  Linux LVM

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   fd  Linux raid autodetect
/dev/sdb2              14        1305    10377990   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Use the partprobe utility to update the kernel with the partition type changes:


[root@oel5-raid1-3 ~]# partprobe /dev/sdb


Verify creation of the new partitions:

[root@oel5-raid1-3 ~]# cat /proc/partitions
major minor  #blocks  name

   8        0   10485760 sda
   8        1     104391 sda1
   8        2   10377990 sda2
   8       16   10485760 sdb
   8       17     104391 sdb1
   8       18   10377990 sdb2
 253        0    8290304 dm-0
 253        1    2064384 dm-1

5. Create RAID 1 Arrays on the Second Disk


Let's now create the RAID 1 devices on the second disk:

[root@oel5-raid1-3 ~]# cat /proc/mdstat
Personalities :
unused devices: <none>
[root@oel5-raid1-3 ~]# mdadm --create /dev/md1 --auto=yes --level=raid1 --raid-devices=2 missing /dev/sdb1
mdadm: array /dev/md1 started.
[root@oel5-raid1-3 ~]# mdadm --create /dev/md2 --auto=yes --level=raid1 --raid-devices=2 missing /dev/sdb2
mdadm: array /dev/md2 started.
[root@oel5-raid1-3 ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1]
      10377920 blocks [2/1] [_U]

md1 : active raid1 sdb1[1]
      104320 blocks [2/1] [_U]

unused devices: <none>

Note: missing is a keyword that holds a slot open in each array for the corresponding partition on sda, which we will add later.
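
Besides /proc/mdstat, each array can be inspected individually with mdadm --detail; on a degraded array the empty slot is reported as "removed":

# show state, member devices and the empty slot of each new array
mdadm --detail /dev/md1
mdadm --detail /dev/md2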

Format md1 as ext3:

[root@oel5-raid1-3 ~]# mkfs.ext3 /dev/md1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
26104 inodes, 104320 blocks
5216 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
13 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
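
As the mkfs output notes, the new file system will be fsck'd periodically at mount time. If you would rather not have those checks on /boot, they can be disabled (optional; tune2fs is part of e2fsprogs):

# disable both the mount-count and the time-based automatic checks
tune2fs -c 0 -i 0 /dev/md1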


6. Move the data from the LVM


Now, we move the data from /dev/sda2 to /dev/md2. First, initialise /dev/md2 as an LVM physical volume and add it to the volume group:

[root@oel5-raid1-3 ~]# pvcreate /dev/md2
  Writing physical volume data to disk "/dev/md2"
  Physical volume "/dev/md2" successfully created
[root@oel5-raid1-3 ~]# vgextend VolGroup00 /dev/md2
  Volume group "VolGroup00" successfully extended

This command starts the volume migration (-i 2 prints progress every 2 seconds):

[root@oel5-raid1-3 ~]# pvmove -i 2 /dev/sda2 /dev/md2

This can take a while.
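
Alternatively, pvmove can run in the background while you poll its progress through lvs (a sketch; with -a the hidden pvmove volume is listed, and the Copy% column shows how far the migration has got):

# start the migration in the background...
pvmove -b /dev/sda2 /dev/md2
# ...and poll the Copy% column every 10 seconds
watch -n 10 'lvs -a -o +devices VolGroup00'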

Now, we remove /dev/sda2 from the volume group:

[root@oel5-raid1-3 ~]# vgreduce VolGroup00 /dev/sda2
  Removed "/dev/sda2" from volume group "VolGroup00"
[root@oel5-raid1-3 ~]# pvremove /dev/sda2
  Labels on physical volume "/dev/sda2" successfully wiped

Now, change the partition type of /dev/sda2 to RAID:

[root@oel5-raid1-3 ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        1305    10377990   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

Now, add it back as a RAID member of md2:

 mdadm --add /dev/md2 /dev/sda2

And monitor its progress using:

watch -n 2 cat /proc/mdstat

Every 2.0s: cat /proc/mdstat                            Sat Jan 26 20:04:01 2013

Personalities : [raid1]
md2 : active raid1 sda2[2] sdb2[1]
      10377920 blocks [2/1] [_U]
      [======>..............]  recovery = 31.7% (3292288/10377920) finish=0.6min speed=193664K/sec

md1 : active raid1 sdb1[1]
      104320 blocks [2/1] [_U]

unused devices: <none>


Press Ctrl-C to exit watch once the rebuild is done.


[root@oel5-raid1-3 ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda2[0] sdb2[1]
      10377920 blocks [2/2] [UU]

md1 : active raid1 sdb1[1]
      104320 blocks [2/1] [_U]

unused devices: <none>



7. Update fstab


The default /etc/fstab is:

/dev/VolGroup00/LogVol00 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/VolGroup00/LogVol01 swap                    swap    defaults        0 0

We need to change it to this:

/dev/VolGroup00/LogVol00 /                       ext3    defaults        1 1
/dev/md1                /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/VolGroup00/LogVol01 swap                    swap    defaults        0 0

The change is this line:

/dev/md1                /boot                   ext3    defaults        1 2

That is, replace "LABEL=/boot" with "/dev/md1".

8. Update grub.conf


The default /boot/grub/grub.conf is:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
#          initrd /initrd-version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Oracle Linux Server (2.6.32-300.10.1.el5uek)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-300.10.1.el5uek ro root=/dev/VolGroup00/LogVol00 rhgb quiet numa=off
        initrd /initrd-2.6.32-300.10.1.el5uek.img
title Oracle Linux Server-base (2.6.18-308.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-308.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet numa=off
        initrd /initrd-2.6.18-308.el5.img

And we change it to:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
#          initrd /initrd-version.img
#boot=/dev/sda
default=0
fallback=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title HDD1 (2.6.32-300.10.1.el5uek)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-300.10.1.el5uek ro root=/dev/VolGroup00/LogVol00 rhgb quiet numa=off
        initrd /initrd-2.6.32-300.10.1.el5uek.img
title HDD2 (2.6.32-300.10.1.el5uek)
        root (hd1,0)
        kernel /vmlinuz-2.6.32-300.10.1.el5uek ro root=/dev/VolGroup00/LogVol00 rhgb quiet numa=off
        initrd /initrd-2.6.32-300.10.1.el5uek.img

The key changes are:

(i) the addition of the fallback parameter
(ii) the addition of the second title, and its respective attributes.

I also updated the titles to reflect the device from which the system is being booted.

The default parameter is important too. It indicates the title from which the system will boot by default. If Grub cannot find a valid /boot partition (e.g. in case of disk failure), then Grub will attempt to boot from the title indicated by fallback.

9. Re-create the Initial RAM Disk


[root@oel5-raid1-3 ~]# cd /boot
[root@oel5-raid1-3 boot]# ll initrd*
-rw------- 1 root root 4372497 Jan 26 19:21 initrd-2.6.18-308.el5.img
-rw------- 1 root root 3934645 Jan 26 19:21 initrd-2.6.32-300.10.1.el5uek.img
[root@oel5-raid1-3 boot]# uname -r
2.6.32-300.10.1.el5uek
[root@oel5-raid1-3 boot]# mkinitrd -f -v initrd-2.6.32-300.10.1.el5uek.img 2.6.32-300.10.1.el5uek

The mkinitrd command takes the form:

mkinitrd -v -f initrd-<kernel version>.img <kernel version>

where <kernel version> is the output of uname -r. That's why it's important to grab uname -r first.
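
Equivalently, the initrd for the currently running kernel can be rebuilt with a single generic command, letting the shell substitute the version:

# rebuild the initrd for whatever kernel is currently running
mkinitrd -v -f /boot/initrd-$(uname -r).img $(uname -r)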

10. Copy /boot

[root@oel5-raid1-3 boot]# mkdir /mnt/boot.md1
[root@oel5-raid1-3 boot]# mount /dev/md1 /mnt/boot.md1
[root@oel5-raid1-3 boot]# cp -dpRxu /boot/* /mnt/boot.md1

This step has to be done before the next one (installing Grub on both disks).
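
Optionally, sanity-check the copy before moving on (diff should print nothing if the two trees match; note the * glob does not pick up dotfiles, so any hidden files under /boot would show up here):

# compare the original /boot with the copy on the RAID device
diff -r /boot /mnt/boot.md1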

11. Install Grub on BOTH disks

It is very important to install Grub on BOTH disks!

[root@oel5-raid1-3 boot]# grub
Probing devices to guess BIOS drives. This may take a long time.


    GNU GRUB  version 0.97  (640K lower / 3072K upper memory)

 [ Minimal BASH-like line editing is supported.  For the first word, TAB
   lists possible command completions.  Anywhere else TAB lists the possible
   completions of a device/filename.]
grub> root (hd0,0)
root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
grub> setup (hd0)
setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
grub> root (hd1,0)
root (hd1,0)
 Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd1)
setup (hd1)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd1)"...  15 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
grub> quit
quit
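
If you need to repeat this later (e.g. after replacing a disk), the same commands can be fed to grub non-interactively via its batch mode (a sketch, using the device names from above):

# install the GRUB boot loader into the MBR of both disks
grub --batch <<'EOF'
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF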

12. Reboot


[root@oel5-raid1-3 boot]# reboot

13. Add /dev/sda1 to /dev/md1


[root@oel5-raid1-3 ~]# mount | grep ext3
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
/dev/md1 on /boot type ext3 (rw)

So /dev/sda1 isn't mounted...

[root@oel5-raid1-3 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb1[1]
      104320 blocks [2/1] [_U]

md2 : active raid1 sdb2[1] sda2[0]
      10377920 blocks [2/2] [UU]

unused devices: <none>

And /dev/sda1 is not yet a member of /dev/md1...

So...

[root@oel5-raid1-3 ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        1305    10377990   fd  Linux raid autodetect

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux raid autodetect
/dev/sda2              14        1305    10377990   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

[Note the change of partition type, from 'Linux' to 'Linux raid autodetect']

[root@oel5-raid1-3 ~]# partprobe /dev/sda

[root@oel5-raid1-3 ~]# mdadm --manage --add /dev/md1 /dev/sda1
mdadm: added /dev/sda1
[root@oel5-raid1-3 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]

md2 : active raid1 sdb2[1] sda2[0]
      10377920 blocks [2/2] [UU]

unused devices: <none>
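
Optionally, record the arrays in /etc/mdadm.conf so that mdadm's monitoring tools know about them. With type fd partitions the kernel autodetects the arrays at boot, so this is informational rather than required:

# append the current array definitions to mdadm's config file
mdadm --detail --scan >> /etc/mdadm.conf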

14. Recreate the Initial RAM Disk


[root@oel5-raid1-3 ~]# cd /boot
[root@oel5-raid1-3 boot]# uname -r
2.6.32-300.10.1.el5uek
[root@oel5-raid1-3 boot]# mkinitrd -v -f initrd-2.6.32-300.10.1.el5uek.img 2.6.32-300.10.1.el5uek
Creating initramfs

15. Testing


To simulate the loss of sdb, we inject a software fault: mark its partitions as failed, then remove them from the arrays:

[root@oel5-raid1-3 ~]# mdadm --manage /dev/md1 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md1
[root@oel5-raid1-3 ~]# mdadm --manage /dev/md2 --fail /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md2
[root@oel5-raid1-3 ~]# mdadm --manage /dev/md1 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1
[root@oel5-raid1-3 ~]# mdadm --manage /dev/md2 --remove /dev/sdb2
mdadm: hot removed /dev/sdb2

Shut down the server and replace /dev/sdb. Start it up, and check the status of the RAID:

[root@oel5-raid1-3 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda1[0]
      104320 blocks [2/1] [U_]

md2 : active raid1 sda2[0]
      10377920 blocks [2/1] [U_]

unused devices: <none>

And here is the status of the new, blank hard disk:

[root@oel5-raid1-3 ~]# fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux raid autodetect
/dev/sda2              14        1305    10377990   fd  Linux raid autodetect

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/md2: 10.6 GB, 10626990080 bytes
2 heads, 4 sectors/track, 2594480 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

...



Copy the partition layout to the new disk:

[root@oel5-raid1-3 ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 1305 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/sdb1   *        63    208844     208782  fd  Linux raid autodetect
/dev/sdb2        208845  20964824   20755980  fd  Linux raid autodetect
/dev/sdb3             0         -          0   0  Empty
/dev/sdb4             0         -          0   0  Empty
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)


Clear any remnants of a previous RAID device on the new disk:

[root@oel5-raid1-3 ~]# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1
[root@oel5-raid1-3 ~]# mdadm --zero-superblock /dev/sdb2
mdadm: Unrecognised md component device - /dev/sdb2

The "Unrecognised md component device" messages are harmless here; they simply mean the new disk carried no old RAID superblock. OK, now add the partitions to the respective md devices:

[root@oel5-raid1-3 ~]# mdadm --add /dev/md1 /dev/sdb1
mdadm: added /dev/sdb1
[root@oel5-raid1-3 ~]# mdadm --add /dev/md2 /dev/sdb2
mdadm: added /dev/sdb2

Monitor the rebuild:

watch -n 2 cat /proc/mdstat

Wait for the re-synchronisation to complete, then press Ctrl-C to exit.

Re-install grub on BOTH hard drives:

[root@oel5-raid1-3 ~]# grub
Probing devices to guess BIOS drives. This may take a long time.


    GNU GRUB  version 0.97  (640K lower / 3072K upper memory)

 [ Minimal BASH-like line editing is supported.  For the first word, TAB
   lists possible command completions.  Anywhere else TAB lists the possible
   completions of a device/filename.]
grub> root (hd0,0)
root (hd0,0)
 Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
grub> root (hd1,0)
root (hd1,0)
 Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd1)
setup (hd1)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd1)"...  15 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
grub> quit
quit


And that's how you replace a disk on md raid!

