Monday, May 04, 2009

Useful commands for Linux RAID

mdadm --create /dev/md0 --level=5 --spare-devices=0 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1
(/dev/md0 is the RAID device name, --level=5 means RAID 5, --raid-devices=3 means three member disks, and /dev/sda1 etc. are the member partitions)
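Once created, md builds the parity in the background; a quick sketch for keeping an eye on it (nothing beyond standard mdadm/procfs here, using the /dev/md0 array just created):

cat /proc/mdstat                   # shows build progress and an ETA
mdadm --query --detail /dev/md0    # full array status
watch -n 5 cat /proc/mdstat        # refresh every 5 seconds until the build finishes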

Then create a filesystem on it with mke2fs:

mke2fs -b 4096 -R stride=8 /dev/md0
(stride should be chunk size / block size; with the 64K chunk shown below and 4K blocks that would be stride=16)

Mount it.
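The mount itself might look like this; the /home/public mount point is borrowed from the df output further down, and the fstab line assumes the plain ext2 filesystem that mke2fs creates by default:

mkdir -p /home/public
mount /dev/md0 /home/public
echo '/dev/md0  /home/public  ext2  defaults  0  2' >> /etc/fstab   # make it permanent across reboots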

How to break (tear down) a RAID array
Fail each device, then remove it, then stop the array, e.g.:
mdadm --manage /dev/mdfoo --fail /dev/sdfoo
mdadm --manage /dev/mdfoo --remove /dev/sdfoo
mdadm --stop /dev/mdfoo
(to simply stop a running array without dismantling it: mdadm --stop /dev/md1)
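For reference, a minimal teardown of the three-disk /dev/md0 built above can also just stop the array and wipe the metadata instead of failing members one by one; a sketch, assuming the filesystem is already unmounted, and note that --zero-superblock is destructive:

umount /home/public                                      # make sure nothing is using the array
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1    # wipe md metadata so the partitions can be reused
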
mdadm --query --detail /dev/md0                  // query array details
mdadm --manage --set-faulty /dev/md1 /dev/sdc2   // force a failure to test recovery
mdadm /dev/md1 -r /dev/sdc2                      // remove the failed disk
mdadm /dev/md1 -a /dev/sdc2                      // add it back once recovered
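Put together, a failure-simulation cycle on /dev/md1 might run like this (device names follow the example above; the rebuild can be followed in /proc/mdstat):

mdadm --manage /dev/md1 --set-faulty /dev/sdc2   # mark the member as failed
cat /proc/mdstat                                 # sdc2 now shows an (F) flag
mdadm /dev/md1 -r /dev/sdc2                      # pull it out of the array
mdadm /dev/md1 -a /dev/sdc2                      # add it back; a rebuild starts
watch -n 5 cat /proc/mdstat                      # follow the rebuild until all members show U again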

mdadm --add /dev/md1 /dev/sdb3           // add a partition to the array
mdadm --grow --raid-devices=4 /dev/md1   // then grow (reshape) the array onto it
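The grow only reshapes the array; the filesystem still has to be enlarged afterwards. A sketch of the whole sequence, assuming the ext2/ext3 filesystem from above and waiting for the reshape to finish (resize2fs grows the filesystem to fill the device; unmount first if it is plain ext2):

mdadm --add /dev/md1 /dev/sdb3            # new partition joins as a spare
mdadm --grow --raid-devices=4 /dev/md1    # reshape spreads data over four devices
cat /proc/mdstat                          # wait until the reshape reaches 100%
resize2fs /dev/md1                        # grow the filesystem into the new space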


Here is the resulting setup on the NAS:
-------------------------------------------------------
root@data:/# df -h
Filesystem                Size      Used Available Use% Mounted on
rootfs                  125.0M     65.3M     59.7M  52% /
/dev/root               125.0M     65.3M     59.7M  52% /
/dev/root               125.0M     65.3M     59.7M  52% /dev/.static/dev
udev                      2.0M     76.0k      1.9M   4% /dev
/dev/md0                458.5G   1001.0M    434.2G   0% /home/public
/dev/sde2               229.2G    128.2M    217.4G   0% /media/sde2
tmpfs                   251.8M    452.0k    251.3M   0% /var/volatile
tmpfs                   251.8M         0    251.8M   0% /dev/shm
tmpfs                   251.8M         0    251.8M   0% /media/ram

partition information:
================
root@data:/# fdisk -l

Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sda1               1       30401   244196001  fd Linux raid autodetect

Disk /dev/sdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdb1               1       30401   244196001  fd Linux raid autodetect

Disk /dev/sdc: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdc1               1       30401   244196001  fd Linux raid autodetect
/dev/sdc2           30402       60801   244188000  83 Linux

Disk /dev/sdd: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdd1               1       30401   244196001  fd Linux raid autodetect

Disk /dev/sde: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sde1               1       30401   244196001  fd Linux raid autodetect
/dev/sde2           30402       60801   244188000  83 Linux

RAID detail:
========
root@data:/# mdadm --query --detail /dev/md0
/dev/md0:
Version : 00.91.03
Creation Time : Tue Jan 24 09:15:56 2034
Raid Level : raid5
Array Size : 488391808 (465.77 GiB 500.11 GB)
Device Size : 244195904 (232.88 GiB 250.06 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Jan 24 13:31:34 2034
State : clean, recovering
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Reshape Status : 2% complete
Delta Devices : 2, (3->5)

UUID : bcf1613f:5de00cfa:8880d4eb:30bb47b4
Events : 0.4058

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       65        2      active sync   /dev/sde1
       3       8       33        3      active sync   /dev/sdc1
       4       8       49        4      active sync   /dev/sdd1

RAID status:
=========
root@data:/# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[3] sde1[2] sdd1[4] sdb1[1] sda1[0]
      488391808 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
      [>....................]  reshape =  2.8% (6997760/244195904) finish=428.4min speed=9224K/sec

unused devices: <none>
