RAID is a method of configuring multiple hard drives to act as one, reducing the probability of catastrophic data loss in case of drive failure. RAID is implemented in either software (where the operating system knows about all the drives and actively maintains them) or hardware (where a special controller makes the OS think there's only one drive and maintains the drives 'invisibly').
The RAID software included with current versions of Linux (and Ubuntu) is based on the 'md' driver, managed with the mdadm utility, and works very well, better even than many so-called 'hardware' RAID controllers. This section will guide you through installing Ubuntu Server Edition using two RAID1 partitions on two physical hard drives, one for / and another for swap.
Follow the installation steps until you get to the Partition disks step, then:
-
Select Manual as the partition method.
-
Select the first hard drive, and agree to "Create a new empty partition table on this device?".
Repeat this step for each drive you wish to be part of the RAID array.
-
Select the "FREE SPACE" on the first drive then select "Create a new partition".
-
Next, select the Size of the partition. This partition will be the swap partition, and a general rule for swap size is twice that of RAM. Enter the partition size, then choose Primary, then Beginning.
-
Select the "Use as:" line at the top. By default this is "Ext3 journaling file system", change that to "physical volume for RAID" then "Done setting up partition".
-
For the / partition, once again select "FREE SPACE" on the first drive, then "Create a new partition".
-
Use the rest of the free space on the drive and choose Continue, then Primary.
-
As with the swap partition, select the "Use as:" line at the top, changing it to "physical volume for RAID" then choose "Done setting up partition".
-
Repeat steps three through eight for the other disk and partitions.
With the partitions set up, the arrays are ready to be configured:
-
Back in the main "Partition Disks" page, select "Configure Software RAID" at the top.
-
Select "yes" to write the changes to disk.
-
Choose "Create MD drive".
-
For this example, select "RAID1", but if you are using a different setup choose the appropriate type (RAID0, RAID1, or RAID5).
In order to use RAID5 you need at least three drives. Using RAID0 or RAID1 only two drives are required.
-
Enter the number of active devices for the array, "2" or however many hard drives you have, then select "Continue".
-
Next, enter the number of spare devices, "0" by default, then choose "Continue".
-
Choose which partitions to use. Generally they will be sda1, sdb1, sdc1, etc. The numbers will usually match and the different letters correspond to different hard drives.
For the swap partition choose sda1 and sdb1. Select "Continue" to go to the next step.
-
Repeat steps three through seven for the / partition choosing sda2 and sdb2.
-
Once done select "Finish".
There should now be a list of hard drives and RAID devices. The next step is to format and set the mount point for the RAID devices. Treat each RAID device as a local hard drive; format and mount it accordingly.
-
Select the RAID1 device #0 partition.
-
Choose "Use as:". Then select "swap area", then "Done setting up partition".
-
Next, select the RAID1 device #1 partition.
-
Choose "Use as:". Then select "Ext3 journaling file system".
-
Then select the "Mount point" and choose "/ - the root file system". Change any of the other options as appropriate, then select "Done setting up partition".
-
Finally, select "Finish partitioning and write changes to disk".
If you choose to place the root partition on a RAID array, the installer will then ask if you would like to boot in a degraded state. See "Degraded RAID" for further details.
The installation process will then continue normally.
At some point in the life of the computer a disk failure event may occur. When this happens with software RAID, the operating system will place the array into what is known as a degraded state.
If the array becomes degraded, Ubuntu Server Edition will, because of the risk of data corruption, boot to initramfs after thirty seconds by default. Once the initramfs has booted, there is a fifteen-second prompt giving you the option to go ahead and boot the system or to attempt manual recovery. Booting to the initramfs prompt may or may not be the desired behavior, especially if the machine is in a remote location. Booting to a degraded array can be configured in several ways:
-
The dpkg-reconfigure utility can be used to configure the default behavior, and during the process you will be queried about additional settings related to the array, such as monitoring, email alerts, etc. To reconfigure mdadm enter the following:
sudo dpkg-reconfigure mdadm
-
The dpkg-reconfigure mdadm process will change the /etc/initramfs-tools/conf.d/mdadm configuration file. The file has the advantage of being able to pre-configure the system's behavior, and can also be edited manually:
BOOT_DEGRADED=true
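Note that if you edit this file by hand rather than running dpkg-reconfigure, the initramfs will generally need to be regenerated for the change to take effect, for example:
sudo update-initramfs -u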
The configuration file can be overridden by using a Kernel argument.
-
Using a Kernel argument will allow the system to boot to a degraded array as well:
-
When the server is booting press ESC to open the Grub menu.
-
Press "e" to edit your Kernel command options.
-
Press the DOWN arrow to highlight the kernel line.
-
Press the "e" key again to edit the kernel line.
-
Add "bootdegraded=true" (without the quotes) to the end of the line.
-
Press "ENTER".
-
Finally, press "b" to boot the system.
-
Once the system has booted you can either repair the array (see "RAID Maintenance" for details) or, in the case of major hardware failure, copy important data to another machine.
The mdadm utility can be used to view the status of an array, add disks to an array, remove disks, etc:
-
To view the status of an array, from a terminal prompt enter:
sudo mdadm -D /dev/md0
The -D tells mdadm to display detailed information about the /dev/md0 device. Replace /dev/md0 with the appropriate RAID device.
-
To view the status of a disk in an array:
sudo mdadm -E /dev/sda1
The output is very similar to that of the mdadm -D command; adjust /dev/sda1 for each disk.
-
If a disk fails and needs to be removed from an array enter:
sudo mdadm --remove /dev/md0 /dev/sda1
Change /dev/md0 and /dev/sda1 to the appropriate RAID device and disk.
-
Similarly, to add a new disk:
sudo mdadm --add /dev/md0 /dev/sda1
Sometimes a disk can change to a faulty state even though there is nothing physically wrong with the drive. It is usually worthwhile to remove the drive from the array then re-add it. This will cause the drive to re-sync with the array. If the drive will not sync with the array, it is a good indication of hardware failure.
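For example, assuming /dev/sda1 in the /dev/md0 array is the affected disk, the remove and re-add cycle might look like the following (adjust the device names to match your system):
sudo mdadm --fail /dev/md0 /dev/sda1
sudo mdadm --remove /dev/md0 /dev/sda1
sudo mdadm --add /dev/md0 /dev/sda1
The first command marks the disk as faulty if it is not already; once re-added, the array will begin re-syncing.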
The /proc/mdstat file also contains useful information about the system's RAID devices:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
10016384 blocks [2/2] [UU]
unused devices: <none>
The following command is great for watching the status of a syncing drive:
watch -n1 cat /proc/mdstat
Press Ctrl+c to stop the watch command.
If you do need to replace a faulty drive, grub will need to be installed after the drive has been replaced and synced. To install grub on the new drive, enter the following:
sudo grub-install /dev/md0
Replace /dev/md0 with the appropriate array device name.
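Depending on the version of grub in use, it may also be worth installing grub to the MBR of each member disk, so that the system can boot from either drive if the other fails. With the device names assumed in this example that would be:
sudo grub-install /dev/sda
sudo grub-install /dev/sdb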
The topic of RAID arrays is a complex one due to the plethora of ways RAID can be configured.
Logical Volume Manager, or LVM, allows administrators to create logical volumes out of one or multiple physical hard disks. LVM volumes can be created on both software RAID partitions and standard partitions residing on a single disk. Volumes can also be extended, giving greater flexibility to systems as requirements change.
A side effect of LVM's power and flexibility is a greater degree of complication. Before diving into the LVM installation process, it is best to get familiar with some terms.
-
Volume Group (VG): contains one or several Logical Volumes (LV).
-
Logical Volume (LV): is similar to a partition in a non-LVM system. An LV is carved out of a Volume Group and may span multiple Physical Volumes (PV); on top of it resides the actual EXT3, XFS, JFS, etc. filesystem.
-
Physical Volume (PV): a physical hard disk, disk partition, or software RAID partition. The Volume Group can be extended by adding more PVs.
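To illustrate how these pieces fit together, here is a minimal command-line sketch; the device name, volume group name, and size below are only examples:
sudo pvcreate /dev/sdc1
sudo vgcreate vg01 /dev/sdc1
sudo lvcreate -n srv -L 10G vg01
sudo mkfs.ext3 /dev/vg01/srv
The first command initializes a partition as a Physical Volume, the second creates a Volume Group containing it, the third carves a 10GB Logical Volume out of the group, and the last puts a filesystem on top of the Logical Volume.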
As an example this section covers installing Ubuntu Server Edition with /srv mounted on an LVM volume. During the initial install only one Physical Volume (PV) will be part of the Volume Group (VG). Another PV will be added after install to demonstrate how a VG can be extended.
There are several installation options for LVM: "Guided - use the entire disk and set up LVM", which also allows you to assign a portion of the available space to LVM; "Guided - use entire disk and set up encrypted LVM"; or manually setting up the partitions and configuring LVM. At this time the only way to configure a system with both LVM and standard partitions, during installation, is to use the manual approach.
-
Follow the installation steps until you get to the Partition disks step, then:
-
At the "Partition Disks screen choose "Manual".
-
Select the hard disk and on the next screen choose "yes" to "Create a new empty partition table on this device".
-
Next, create standard /boot, swap, and / partitions with whichever filesystem you prefer.
-
For the LVM /srv, create a new Logical partition. Then change "Use as" to "physical volume for LVM" then "Done setting up the partition".
-
Now select "Configure the Logical Volume Manager" at the top, and choose "Yes" to write the changes to disk.
-
For the "LVM configuration action" on the next screen, choose "Create volume group". Enter a name for the VG such as vg01, or something more descriptive. After entering a name, select the partition configured for LVM, and choose "Continue".
-
Back at the "LVM configuration action" screen, select "Create logical volume". Select the newly created volume group, and enter a name for the new LV, for example srv since that is the intended mount point. Then choose a size, which may be the full partition because it can always be extended later. Choose "Finish" and you should be back at the main "Partition Disks" screen.
-
Now add a filesystem to the new LVM volume. Select the partition under "LVM VG vg01, LV srv", or whatever name you have chosen, then choose "Use as". Set up a file system as normal, selecting /srv as the mount point. Once done, select "Done setting up the partition".
-
Finally, select "Finish partitioning and write changes to disk". Then confirm the changes and continue with the rest of the installation.
There are some useful utilities to view information about LVM:
-
vgdisplay: shows information about Volume Groups.
-
lvdisplay: has information about Logical Volumes.
-
pvdisplay: similarly displays information about Physical Volumes.
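For example, with the volume group and logical volume created above (names assumed from this walkthrough):
sudo pvdisplay
sudo vgdisplay vg01
sudo lvdisplay /dev/vg01/srv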
Continuing with srv as an LVM volume example, this section covers adding a second hard disk, creating a Physical Volume (PV), adding it to the volume group (VG), extending the logical volume srv, and finally extending the filesystem. This example assumes a second hard disk has been added to the system. This hard disk will be named /dev/sdb in our example. BEWARE: make sure you don't already have an existing /dev/sdb before issuing the commands below. You could lose some data if you issue those commands on a non-empty disk. In our example we will use the entire disk as a physical volume (you could choose to create partitions and use them as different physical volumes).
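To double-check which disk is which before running the commands below, you can list the partition tables on the system, for example:
sudo fdisk -l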
-
First, create the physical volume. In a terminal execute:
sudo pvcreate /dev/sdb
-
Now extend the Volume Group (VG):
sudo vgextend vg01 /dev/sdb
-
Use vgdisplay to find out the free physical extents - Free PE / size (the size you can allocate). We will assume a free size of 511 PE (equivalent to 2GB with a PE size of 4MB) and we will use the whole free space available. Use your own PE and/or free space.
The Logical Volume (LV) can now be extended by different methods; we will only see how to use the PE to extend the LV:
sudo lvextend /dev/vg01/srv -l +511
The -l option allows the LV to be extended using PE. The -L option allows the LV to be extended in megabytes, gigabytes, terabytes, etc. (see the example after this procedure).
-
Even though you are supposed to be able to expand an ext3 or ext4 filesystem without unmounting it first, it is good practice to unmount it anyway and check the filesystem, so that you don't mess things up the day you want to reduce a logical volume (in that case unmounting first is compulsory).
The following commands are for an EXT3 or EXT4 filesystem. If you are using another filesystem there may be other utilities available.
sudo umount /srv
sudo e2fsck -f /dev/vg01/srv
The -f option of e2fsck forces checking even if the system seems clean.
-
Finally, resize the filesystem:
sudo resize2fs /dev/vg01/srv
-
Now mount the partition and check its size.
sudo mount /dev/vg01/srv /srv && df -h /srv
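As mentioned above, the -L option can be used instead of -l to extend the LV by size rather than by extents; for example, to grow the same LV by two gigabytes (size and names assume this example):
sudo lvextend -L +2G /dev/vg01/srv
The filesystem would then need to be resized again as shown above.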
-
See the LVM HOWTO for more information.
-
Another good article is Managing Disk Space with LVM on O'Reilly's linuxdevcenter.com site.
-
For more information on fdisk see the fdisk man page.