
Configure Software RAID1 Array In Linux

Before we start, you should know this tutorial assumes two things:

  • First - the system where MDADM will be running is up to date.
  • Second - every step below is performed with root privileges or equivalent, as shown below.
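
If you are on a sudo-based distribution such as Ubuntu, one way to satisfy the second assumption is to open a root shell for the whole session instead of prefixing every command with sudo:

sudo -i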

How Many Drives Are Available?

We need to list the available drives on the system where we wish to create the array. This task can be accomplished with the ls command:

ls /dev/sd*

user@ubuntu:~$ ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sda5 /dev/sdb /dev/sdc /dev/sdd

Four drives are listed: /dev/sda (and its partitions 1, 2, and 5) contains the operating system, /dev/sdb and /dev/sdc are empty and will be used to create the RAID1 array, and /dev/sdd can be left alone or used as a hot-spare.
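
If your distribution ships the lsblk utility, it presents the same information as a tree, which can be easier to read than the raw device list:

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT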

Drive Preparations

First of all we should prepare the drives we want to include in the array; by preparing I am referring to partitioning each drive. For the purpose of partitioning we will be using a well known utility called fdisk. Repeat fdisk on every drive that will be part of the array, remembering to run the exact same commands and parameters each time. I’ll do my best to guide you through the steps required to partition a drive with fdisk.

root@ubuntu:~# fdisk /dev/sdb

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-251658239, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-251658239, default 251658239):
Using default value 251658239

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
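
If you would rather not repeat the interactive session on each remaining drive, sfdisk (shipped alongside fdisk on most distributions) can copy the finished partition table from the first drive to the others in one shot:

sfdisk -d /dev/sdb | sfdisk /dev/sdc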

If you prefer a graphical interface, GParted is a good alternative to fdisk.

Is MDADM Installed On The System?

The answer is probably no! If that is your case, then I would recommend installing MDADM via the designated package manager for your Linux distribution. For Red Hat and derivatives you can use yum:

yum install mdadm

For Debian and derivatives you can use apt-get:

apt-get install mdadm
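
Either way, you can confirm the installation before moving on:

mdadm --version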

Array Creation

If you look carefully at the command below you can see the creation of the array, the RAID level selected, and the devices to be included in the array:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

  • /dev/md0

    - is the name of our array

  • --level=1

    - equals RAID1 or mirror

  • --raid-devices=2

    - is the number of drives in the array
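
As an aside, if you want /dev/sdd to serve as the hot-spare mentioned earlier, mdadm also accepts a --spare-devices option. A minimal sketch, assuming /dev/sdd1 has been partitioned the same way as the other two drives:

mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1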

You’ll be asked Continue creating array?; the answer is yes. Now we add the new array to the system’s mdadm configuration file, mdadm.conf.

mdadm --detail --scan >> /etc/mdadm.conf
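
Note that on Debian and Ubuntu the configuration file lives at /etc/mdadm/mdadm.conf instead, and it is worth refreshing the initramfs afterwards so the array is assembled at boot:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u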

Monitor the creation or rebuild process of the designated array:

root@ubuntu:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun May  6 20:32:39 2012
     Raid Level : raid1
     Array Size : 125827000 (120.00 GiB 128.85 GB)
  Used Dev Size : 125827000 (120.00 GiB 128.85 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sun May  6 20:33:10 2012
          State : active, resyncing

 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

  Resync Status : 1% complete

           Name : ubuntu:0  (local to host ubuntu)
           UUID : 91f3904c:e3580ae7:a2b1cf77:b1be9efa
         Events : 1

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

The Resync Status : 1% complete line shows the build progress of our array. You can also monitor all arrays within the system; I consider this to be a more elegant display:

root@ubuntu:~# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
125827000 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  3.8% (4838272/125827000) finish=30.7min speed=65591K/sec

unused devices: <none>

Grab a cup of coffee or a beer; it might take a while before the operation completes.
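
If you would rather not keep re-running the command by hand, watch can refresh the output for you every few seconds:

watch -n 5 cat /proc/mdstat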

Format The Array

Now that our array is complete we format the volume, just like we would format any other drive:

mkfs -t ext3 /dev/md0
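
ext3 matches the fstab entry used later in this tutorial; on a newer system ext4 works just as well, as long as you adjust that entry to match:

mkfs -t ext4 /dev/md0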

Mount The New Array

If you want to make use of your new array, which from now on we’ll call a volume, we need to mount it first. Create a directory where we can mount it:

mkdir /media/volume0

Now provide the path to the array and the path to the directory we created in the previous step:

mount /dev/md0 /media/volume0

Make sure our user can access the partition by giving ownership of the partition to our regular user; otherwise you will have to be root to create anything. Replace user:user with the appropriate user and group on your system.

chown user:user /media/volume0/
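
A quick ls confirms that the ownership change took effect:

ls -ld /media/volume0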

Thanks to the df command we can see our RAID-protected volume; check the bottom of the output.

user@ubuntu:~$ df -H
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        54G  2.0G   49G   4% /
udev            186M  4.1k  186M   1% /dev
tmpfs            78M  308k   78M   1% /run
none            5.3M     0  5.3M   0% /run/lock
none            194M     0  194M   0% /run/shm
/dev/md0        127G  197M  121G   1% /media/volume0

However, if you want the volume to survive a reboot you need to add it to the /etc/fstab configuration file. You can easily add the new entry by making use of the echo command:

echo '/dev/md0 /media/volume0 ext3 noatime,rw 0 0' >> /etc/fstab

Let’s see if our new entry is in the /etc/fstab file.

[root@iou ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Mar 22 20:41:14 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup-lv_root / ext4 defaults 1 1
UUID=489ea666-1374-41ba-b4ef-725c04693437 /boot ext4 defaults 1 2
/dev/mapper/VolGroup-lv_swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0

/dev/md0 /media/volume0 ext3 noatime,rw 0 0
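
There it is at the bottom of the file. You can test the entry without rebooting by unmounting the volume and letting mount read it back from /etc/fstab:

umount /media/volume0
mount -a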

If the volume comes back up, our array will be mounted automatically every time we start the system. If you have questions or suggestions on how this tutorial could be improved, leave a comment below. Thank you.

May 7, 2012