
Monday, March 3, 2014

Replacing a failed hard disk in software RAID 1

In this example, the server was built with software RAID 1 across two hard disks, sda and sdb. In this case, sdb has failed. The steps below show how to replace the disk in an mdadm RAID configuration.

1. Check which RAID arrays the sdb partitions belong to
$ cat /proc/mdstat

####################
 root@raid:/# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb6[2] sda6[0]
      6917056 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[2] sda1[0]
      487104 blocks super 1.2 [2/2] [UU]

unused devices:
####################
You will see that sdb1 is a member of md0 and sdb6 is a member of md1.
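
If you want to double-check the membership from the other direction, mdadm can report an array's members and read the RAID superblock on a partition directly. This is just an optional cross-check using standard mdadm options, not part of the original procedure:
$ mdadm --detail /dev/md0      # lists the member devices of the array and their state
$ mdadm --examine /dev/sdb1    # shows the md superblock stored on the partition itself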

2. Mark the sdb partitions as failed
$ mdadm --manage /dev/md0 --fail /dev/sdb1
$ mdadm --manage /dev/md1 --fail /dev/sdb6

####################
root@raid:/# mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

root@raid:/# mdadm --manage /dev/md1 --fail /dev/sdb6
mdadm: set /dev/sdb6 faulty in /dev/md1

root@raid:/# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb6[2](F) sda6[0]
      6917056 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sdb1[2](F) sda1[0]
      487104 blocks super 1.2 [2/1] [U_]

unused devices:

root@raid:/#
####################
You will see the status of md0 and md1 change to [2/1] [U_], which means the arrays are now degraded.

3. Remove sdb1 and sdb6 from the RAID arrays
$ mdadm --manage /dev/md0 --remove /dev/sdb1
$ mdadm --manage /dev/md1 --remove /dev/sdb6

####################
root@raid:/# mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0
root@raid:/# mdadm --manage /dev/md1 --remove /dev/sdb6
mdadm: hot removed /dev/sdb6 from /dev/md1

root@raid:/# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda6[0]
      6917056 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda1[0]
      487104 blocks super 1.2 [2/1] [U_]

unused devices:

####################
You will see that partitions sdb1 and sdb6 have been removed from the RAID arrays.

4. Shut down and replace the hard disk


4.1 If the replacement hard disk contains old data, clear it first
$ dd if=/dev/zero bs=1M count=1 of=/dev/sdb; sync
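
If the replacement disk was previously part of another md array, it can also help to wipe any leftover RAID superblocks so mdadm does not try to auto-assemble them. This uses the --zero-superblock option shown in the cheat sheet further down; the partition names here are only examples for a disk that still carries old partitions:
$ mdadm --zero-superblock /dev/sdb1
$ mdadm --zero-superblock /dev/sdb6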

5. After powering the server back on, check that the new hard disk is detected

$ fdisk -l /dev/sdb

####################
root@raid:/# fdisk -l /dev/sdb

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
root@raid:/#
####################

6. Before we add the new hard disk to the RAID arrays, we need to clone the partition table from sda and confirm that the layout on sdb matches sda (a way to keep a backup copy of the table is sketched after the output below)
$ sfdisk -d /dev/sda | sfdisk /dev/sdb

####################
root@raid:/# sfdisk -d /dev/sda | sfdisk /dev/sdb
Checking that no-one is using this disk right now ...
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
OK....
...
root@raid:/# fdisk -l /dev/sdb

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048      976895      487424   fd  Linux raid autodetect
/dev/sdb2          978942    16775167     7898113    5  Extended
/dev/sdb5          978944     2930687      975872   82  Linux swap / Solaris
/dev/sdb6         2932736    16775167     6921216   fd  Linux raid autodetect
####################
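
Since sfdisk overwrites the target's partition table without asking, it can be worth saving the layout to a file first and applying it from there. This is a convenience sketch, not part of the original procedure; the file path is just an example:
$ sfdisk -d /dev/sda > /root/sda-parttable.dump    # save the partition layout of sda to a file
$ sfdisk /dev/sdb < /root/sda-parttable.dump       # apply the saved layout to the new disk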

7. Add the sdb partitions back to the RAID arrays (see the monitoring tip after the output below)
$ mdadm --manage /dev/md0 --add /dev/sdb1
$ mdadm --manage /dev/md1 --add /dev/sdb6

####################
root@raid:/# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: added /dev/sdb1
root@raid:/# mdadm --manage /dev/md1 --add /dev/sdb6
mdadm: added /dev/sdb6
root@raid:/# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[2] sda1[0]
      487104 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb6[2] sda6[0]
      6917056 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  1.8% (130816/6917056) finish=0.8min speed=130816K/sec

unused devices:
####################
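
The recovery line in the output above shows the rebuild progress. As mentioned in the cheat sheet further down, you can watch it update continuously with:
$ watch cat /proc/mdstat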

8. Install GRUB on the new disk
$ grub-install /dev/sdb

####################
root@raid:/# grub-install /dev/sdb
Installation finished. No error reported.
root@raid:/#
####################

Remark: You can remove the GRUB boot code again with:
$ dd if=/dev/zero of=/dev/sdb bs=446 count=1
(the first 446 bytes of the MBR hold the boot loader code; the partition table that follows is left untouched)

9. To confirm that GRUB is installed on the disk, there are two ways:
- by command (a one-line variant is sketched after this list)
$ dd if=/dev/sdb  bs=512  count=1 2>/dev/null | hexdump -C
If the output contains the string GRUB, then GRUB is installed on sdb correctly.

- by the "Boot Info Script" developed by ghulselmans, which you can download from
http://sourceforge.net/projects/bootinfoscript/
After downloading and extracting it, just run ./bootinfoscript and it will write the GRUB information to the file RESULTS.txt
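
As a quicker variant of the first check, the MBR read can be piped through strings and grep so it only reports whether the GRUB marker is present. This one-liner is just a convenience sketch based on the dd command above:
$ dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep -q GRUB && echo "GRUB boot code found on /dev/sdb"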

10. That is all for replacing the disk, but there is still the swap partition to take care of. In my case I created a 1 GB swap partition on both sda5 and sdb5, and after the disk change the swap partition on sdb5 is not active:
####################
root@raid:/# free -m
             total       used       free     shared    buffers     cached
Mem:           495         89        406          0         10         39
-/+ buffers/cache:         40        455
Swap:          952          0        952
root@raid:/#
####################

11. Because the hard disk is new, the UUID of its swap area changes, so we need to recreate the swap area and update the UUID in the fstab file
$ mkswap /dev/sdb5

####################
root@raid:/# mkswap /dev/sdb5
Setting up swapspace version 1, size = 975868 KiB
no label, UUID=815d087f-0875-494a-a947-2220fd14a12d
root@raid:/#
####################

12. Copy the new UUID into /etc/fstab, replacing the old entry (a scripted variant is sketched after the example below)
In my case I changed from

# swap was on /dev/sdb5 during installation
UUID=0f049d6d-506c-4bdb-9fec-1a6b808fbb65 none            swap    sw              0       0

to

# swap was on /dev/sdb5 during installation
UUID=815d087f-0875-494a-a947-2220fd14a12d none            swap    sw              0       0
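
If you prefer to script the change instead of editing /etc/fstab by hand, the new UUID can be read with blkid and substituted with sed. This is only a sketch: the old UUID below is the example value from this post, so adjust both sides to your own system and keep a backup of fstab first.
$ cp /etc/fstab /etc/fstab.bak
$ NEW_UUID=$(blkid -s UUID -o value /dev/sdb5)
$ sed -i "s/UUID=0f049d6d-506c-4bdb-9fec-1a6b808fbb65/UUID=${NEW_UUID}/" /etc/fstab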

13. Activate the swap partition on sdb5
$ swapon /dev/sdb5

####################
root@raid:/# swapon /dev/sdb5
root@raid:/# free -m
             total       used       free     shared    buffers     cached
Mem:           495         93        402          0         10         42
-/+ buffers/cache:         40        455
Swap:         1905          0       1905
root@raid:/#
####################

Tuesday, April 3, 2012

LiveCD software raid starting at md125


If you boot an existing system with a live CD, it may change the RAID device names (/dev/mdX), numbering them from 125 upward. This can prevent the system from starting up normally, because /etc/fstab still uses the old /dev/mdX names starting from md1. To rename a software RAID device, stop it and reassemble it under the desired name:

This will rename md125 ... md127 to md1 ... md3 respectively; a note on making the new names persistent follows after the commands:

#mdadm --stop /dev/md125
#mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1
#mdadm --stop /dev/md126
#mdadm --assemble /dev/md2 /dev/sda2 /dev/sdb2
#mdadm --stop /dev/md127
#mdadm --assemble /dev/md3 /dev/sda3 /dev/sdb3
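
To make the new names persistent across reboots, it usually also helps to record the ARRAY definitions in mdadm.conf (see section 2 of the cheat sheet below); on Debian-based systems the initramfs should then be regenerated. The update-initramfs step is my assumption and applies only to distributions that use it:
#mdadm --detail --scan >> /etc/mdadm/mdadm.conf
#update-initramfs -u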

Thursday, June 9, 2011

mdadm cheat sheet


1. Create a new RAID array

Create (mdadm --create) is used to create a new array:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
or using the compact notation:
mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[ab]1

2. /etc/mdadm.conf

/etc/mdadm.conf or /etc/mdadm/mdadm.conf (on Debian) is the main configuration file for mdadm. After we create our RAID arrays, we add them to this file using:
mdadm --detail --scan >> /etc/mdadm.conf
or on Debian
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

3. Remove a disk from an array

We can't remove a disk directly from the array unless it has failed, so we first have to fail it (if the drive has physically failed, it is normally already marked as failed and this step is not needed):
mdadm --fail /dev/md0 /dev/sda1
and now we can remove it:
mdadm --remove /dev/md0 /dev/sda1

This can be done in a single step using:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

4. Add a disk to an existing array

We can add a new disk to an array (most likely to replace a failed one):
mdadm --add /dev/md0 /dev/sdb1


5. Verifying the status of the RAID arrays

We can check the status of the arrays on the system with:
cat /proc/mdstat
or
mdadm --detail /dev/md0

The output of this command will look like:

cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]

md1 : active raid1 sdb3[1] sda3[0]
19542976 blocks [2/2] [UU]

md2 : active raid1 sdb4[1] sda4[0]
223504192 blocks [2/2] [UU]

here we can see both drives are used and working fine (U). A failed drive will show as F, while a degraded array will show the missing disk as _ (for example [2/1] [U_]).

Note: while a RAID rebuild operation is running, monitoring its status with watch can be useful:
watch cat /proc/mdstat

6. Stop and delete a RAID array

If we want to completely remove a RAID array, we have to stop it first and then remove it:
mdadm --stop /dev/md0
mdadm --remove /dev/md0

and finally we can even delete the superblock from the individual drives:
mdadm --zero-superblock /dev/sda

#Clone partition table
sfdisk -d /dev/sda | sfdisk /dev/sdb

(this will dump the partition table of sda and completely remove the existing partitions on sdb, so be sure you want this before running the command, as it will not warn you at all).

There are many other uses of mdadm, particular to each RAID level, and I would recommend the manual page (man mdadm) or the built-in help (mdadm --help) if you need more details on its usage. Hopefully these quick examples will put you on the fast track with how mdadm works.

Reference :
http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/