This section describes how to create software RAID 6 and 10 devices using the Multiple Devices Administration (mdadm(8)) tool. You can also use mdadm to create RAID 0, 1, 4, and 5 devices. The mdadm tool provides the functionality of the legacy mdtools and raidtools programs.
RAID 6 is essentially an extension of RAID 5 that allows for additional fault tolerance by using a second independent distributed parity scheme (dual parity). Even if one of the hard disk drives fails during the data recovery process, the system continues to be operational, with no data loss.
RAID 6 provides extremely high data fault tolerance by sustaining multiple simultaneous drive failures. It handles the loss of any two devices without data loss. Accordingly, it requires N+2 drives to store N drives' worth of data, and it requires a minimum of four devices.
The performance of RAID 6 is slightly lower than, but comparable to, RAID 5 in normal mode and single disk failure mode. It is very slow in dual disk failure mode.
Table 7.1. Comparison of RAID 5 and RAID 6
The procedure in this section creates a RAID 6 device /dev/md0 with four devices: /dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1. Make sure to modify the procedure to use your actual device nodes.
Open a terminal console, then log in as the root user or equivalent.
Create the RAID 6 device. At the command prompt, enter
mdadm --create /dev/md0 --run --level=raid6 --chunk=128 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
The default chunk size is 64 KB.
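(Optional) Verify that the array was created and watch the initial synchronization. This check is not part of the original procedure, but it uses standard mdadm queries:
mdadm --detail /dev/md0
cat /proc/mdstat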
Create a file system on the RAID 6 device /dev/md0, such as a Reiser file system (reiserfs). For example, at the command prompt, enter
mkfs.reiserfs /dev/md0
Modify the command if you want to use a different file system.
Edit the /etc/mdadm.conf file to add entries for the component devices and the RAID device /dev/md0.
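For example, entries similar to the following could be used (a sketch only; the device list matches this procedure, and you can generate an authoritative ARRAY line with mdadm --detail --scan):
DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1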
Edit the /etc/fstab file to add an entry for the RAID 6 device /dev/md0.
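For example, an entry similar to the following would mount the array at /local with the Reiser file system created above (a sketch; adjust the mount point and mount options for your setup):
/dev/md0   /local   reiserfs   defaults   0 0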
Reboot the server.
The RAID 6 device is mounted to /local.
(Optional) Add a hot spare to service the RAID array. For example, at the command prompt enter:
mdadm /dev/md0 -a /dev/sde1
A nested RAID device consists of a RAID array that uses another RAID array as its basic element, instead of using physical disks. The goal of this configuration is to improve the performance and fault tolerance of the RAID.
Linux supports nesting of RAID 1 (mirroring) and RAID 0 (striping) arrays. Generally, this combination is referred to as RAID 10. To distinguish the order of the nesting, this document uses the following terminology:
RAID 1+0: RAID 1 (mirror) arrays are built first, then combined to form a RAID 0 (stripe) array.
RAID 0+1: RAID 0 (stripe) arrays are built first, then combined to form a RAID 1 (mirror) array.
The following table describes the advantages and disadvantages of RAID 10 nesting as 1+0 versus 0+1. It assumes that the storage objects you use reside on different disks, each with a dedicated I/O capability.
Table 7.2. Nested RAID Levels
A nested RAID 1+0 is built by creating two or more RAID 1 (mirror) devices, then using them as component devices in a RAID 0.
Note: If you need to manage multiple connections to the devices, you must configure multipath I/O before configuring the RAID devices. For information, see Chapter 5, Managing Multipath I/O for Devices.
The procedure in this section uses the device names shown in the following table. Make sure to modify the device names with the names of your own devices.
Table 7.3. Scenario for Creating a RAID 10 (1+0) by Nesting
Open a terminal console, then log in as the root user or equivalent.
Create 2 software RAID 1 devices, using two different devices for each RAID 1 device. At the command prompt, enter these two commands:
mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
Create the nested RAID 1+0 device. At the command prompt, enter the following command using the software RAID 1 devices you created in Step 2:
mdadm --create /dev/md2 --run --level=0 --chunk=64 --raid-devices=2 /dev/md0 /dev/md1
The default chunk size is 64 KB.
Create a file system on the RAID 1+0 device /dev/md2, such as a Reiser file system (reiserfs). For example, at the command prompt, enter
mkfs.reiserfs /dev/md2
Modify the command if you want to use a different file system.
Edit the /etc/mdadm.conf file to add entries for the component devices and the RAID device /dev/md2.
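For a nested array, both the underlying RAID 1 devices and the top-level RAID 0 device must be known to mdadm so that /dev/md2 can be assembled at boot. The following is only a sketch of the kind of entries involved, using the device names from this procedure (prefer the output of mdadm --detail --scan for the ARRAY lines):
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md0 devices=/dev/sdb1,/dev/sdc1
ARRAY /dev/md1 devices=/dev/sdd1,/dev/sde1
ARRAY /dev/md2 devices=/dev/md0,/dev/md1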
Edit the /etc/fstab file to add an entry for the RAID 1+0 device /dev/md2.
Reboot the server.
The RAID 1+0 device is mounted to /local.
(Optional) Add hot spares to service the underlying RAID 1 mirrors.
For information, see Section 6.4, “Adding or Removing a Spare Disk”.
A nested RAID 0+1 is built by creating two to four RAID 0 (striping) devices, then mirroring them as component devices in a RAID 1.
Note: If you need to manage multiple connections to the devices, you must configure multipath I/O before configuring the RAID devices. For information, see Chapter 5, Managing Multipath I/O for Devices.
In this configuration, spare devices cannot be specified for the underlying RAID 0 devices because RAID 0 cannot tolerate a device loss. If a device fails on one side of the mirror, you must create a replacement RAID 0 device, then add it into the mirror, as sketched below.
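A rough outline of that replacement, assuming the device names used in the procedure below (/dev/md0 and /dev/md1 as the underlying stripes, /dev/md2 as the mirror) and a hypothetical replacement partition /dev/sdf1; the exact steps depend on how the failure manifests on your system:
mdadm /dev/md2 --fail /dev/md0
mdadm /dev/md2 --remove /dev/md0
mdadm --stop /dev/md0
mdadm --create /dev/md0 --run --level=0 --chunk=64 --raid-devices=2 /dev/sdb1 /dev/sdf1
mdadm /dev/md2 -a /dev/md0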
The procedure in this section uses the device names shown in the following table. Make sure to modify the device names with the names of your own devices.
Table 7.4. Scenario for Creating a RAID 10 (0+1) by Nesting
Open a terminal console, then log in as the root user or equivalent.
Create 2 software RAID 0 devices, using two different devices for each RAID 0 device. At the command prompt, enter these two commands:
mdadm --create /dev/md0 --run --level=0 --chunk=64 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --run --level=0 --chunk=64 --raid-devices=2 /dev/sdd1 /dev/sde1
The default chunk size is 64 KB.
Create the nested RAID 0+1 device. At the command prompt, enter the following command using the software RAID 0 devices you created in Step 2:
mdadm --create /dev/md2 --run --level=1 --raid-devices=2 /dev/md0 /dev/md1
Create a file system on the RAID 0+1 device /dev/md2, such as a Reiser file system (reiserfs). For example, at the command prompt, enter
mkfs.reiserfs /dev/md2
Modify the command if you want to use a different file system.
Edit the /etc/mdadm.conf file to add entries for the component devices and the RAID device /dev/md2.
Edit the /etc/fstab file to add an entry for the RAID 0+1 device /dev/md2.
Reboot the server.
The RAID 0+1 device is mounted to /local.
In mdadm, the RAID10 level creates a single complex software RAID that combines features of both RAID 0 (striping) and RAID 1 (mirroring). Multiple copies of all data blocks are arranged on multiple drives following a striping discipline. Component devices should be the same size.
The complex RAID 10 is similar in purpose to a nested RAID 10 (1+0), but differs in the following ways:
Table 7.5. Complex vs. Nested RAID 10
When configuring a RAID10-level array, you must specify the number of replicas of each data block that are required. The default number of replicas is 2, but the value can range from 2 to the number of devices in the array.
You must use at least as many component devices as the number of replicas you specify. However, the number of component devices in a RAID10-level array does not need to be a multiple of the number of replicas of each data block. The effective storage size is the total size of the component devices divided by the number of replicas.
For example, if you specify 2 replicas for an array created with 5 component devices, a copy of each block is stored on two different devices. The effective storage size for one copy of all data is 5/2 or 2.5 times the size of a component device.
With the near layout, copies of a block of data are striped near each other on different component devices. That is, multiple copies of one data block are at similar offsets in different devices. Near is the default layout for RAID10. For example, if you use an odd number of component devices and two copies of data, some copies are perhaps one chunk further into the device.
The near layout for the mdadm RAID10 yields read and write performance similar to RAID 0 over half the number of drives.
Near layout with an even number of disks and two replicas:
sda1 sdb1 sdc1 sde1
  0    0    1    1
  2    2    3    3
  4    4    5    5
  6    6    7    7
  8    8    9    9
Near layout with an odd number of disks and two replicas:
sda1 sdb1 sdc1 sde1 sdf1
  0    0    1    1    2
  2    3    3    4    4
  5    5    6    6    7
  7    8    8    9    9
 10   10   11   11   12
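As an illustration (not part of the original text), the near layout with two replicas can be requested explicitly with the --layout (-p) option when creating a complex RAID 10; the device names here are placeholders:
mdadm --create /dev/md0 --run --level=10 --layout=n2 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sde1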
The far layout stripes data over the early part of all drives, then stripes a second copy of the data over the later part of all drives, making sure that all copies of a block are on different drives. The second set of values starts halfway through the component drives.
With a far layout, the read performance of the mdadm RAID10 is similar to a RAID 0 over the full number of drives, but write performance is substantially slower than a RAID 0 because there is more seeking of the drive heads. It is best used for read-intensive operations such as for read-only file servers.
Far layout with an even number of disks and two replicas:
sda1 sdb1 sdc1 sde1
  0    1    2    3
  4    5    6    7
  . . .
  3    0    1    2
  7    4    5    6
Far layout with an odd number of disks and two replicas:
sda1 sdb1 sdc1 sde1 sdf1
  0    1    2    3    4
  5    6    7    8    9
  . . .
  4    0    1    2    3
  9    5    6    7    8
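Similarly, the far layout with two replicas can be requested with --layout=f2 (again an illustration with placeholder device names):
mdadm --create /dev/md0 --run --level=10 --layout=f2 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sde1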
The RAID10-level option for mdadm creates a RAID 10 device without nesting. For information about the RAID10-level, see Section 7.3, “Creating a Complex RAID 10 with mdadm”.
The procedure in this section uses the device names shown in the following table. Make sure to modify the device names with the names of your own devices.
Table 7.6. Scenario for Creating a RAID 10 Using the mdadm RAID10 Option
In YaST, create a 0xFD Linux RAID partition on the devices you want to use in the RAID, such as /dev/sdf1, /dev/sdg1, /dev/sdh1, and /dev/sdi1.
Open a terminal console, then log in as the root user or equivalent.
Create the RAID 10 device. At the command prompt, enter (all on the same line):
mdadm --create /dev/md3 --run --level=10 --chunk=4 --raid-devices=4 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
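(Optional) Confirm the level, layout, and chunk size that were applied; this check is not part of the original steps. The output of the following command includes Raid Level, Layout (for example, near=2), and Chunk Size lines:
mdadm --detail /dev/md3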
Create a Reiser file system on the RAID 10 device /dev/md3. At the command prompt, enter
mkfs.reiserfs /dev/md3
Edit the /etc/mdadm.conf file to add entries for the component devices and the RAID device /dev/md3. For example:
DEVICE /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
ARRAY /dev/md3 devices=/dev/sdf1,/dev/sdg1,/dev/sdh1,/dev/sdi1
Edit the /etc/fstab file to add an entry for the RAID 10 device /dev/md3.
Reboot the server.
The RAID 10 device is mounted to /raid10.
A degraded array is one in which some devices are missing. Degraded arrays are supported only for RAID 1, RAID 4, RAID 5, and RAID 6. These RAID types are designed to withstand some missing devices as part of their fault-tolerance features. Typically, degraded arrays occur when a device fails. It is possible to create a degraded array on purpose.
RAID Type    Allowable Number of Slots Missing
RAID 1       All but one device
RAID 4       One slot
RAID 5       One slot
RAID 6       One or two slots
To create a degraded array in which some devices are missing, simply give the word missing in place of a device name. This causes mdadm to leave the corresponding slot in the array empty.
When creating a RAID 5 array, mdadm automatically creates a degraded array with an extra spare drive. This is because building the spare into a degraded array is generally faster than resynchronizing the parity on a non-degraded, but not clean, array. You can override this feature with the --force option.
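For illustration only (the device names are placeholders), the first command below creates a RAID 5 array with one slot intentionally left empty, and the second uses --force to create a fully populated, non-degraded RAID 5 from the start:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 missing
mdadm --create /dev/md0 --force --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1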
Creating a degraded array might be useful if you want to create a RAID, but one of the devices you want to use already has data on it. In that case, you create a degraded array with the other devices, copy the data from the in-use device to the RAID that is running in degraded mode, add the device to the RAID, then wait while the RAID is rebuilt so that the data is spread across all of the devices. An example of this process is given in the following procedure:
To create a degraded RAID 1 device /dev/md0 using the single drive /dev/sda1, enter the following at the command prompt:
mdadm --create /dev/md0 -l 1 -n 2 /dev/sda1 missing
The device should be the same size or larger than the device you plan to add to it.
If the device you want to add to the mirror contains data that you want to move to the RAID array, copy it now to the RAID array while it is running in degraded mode.
Add a device to the mirror. For example, to add /dev/sdb1 to the RAID, enter the following at the command prompt:
mdadm /dev/md0 -a /dev/sdb1
You can add only one device at a time. You must wait for the kernel to build the mirror and bring it fully online before you add another mirror.
Monitor the build progress by entering the following at the command prompt:
cat /proc/mdstat
To see the rebuild progress while being refreshed every second, enter
watch -n 1 cat /proc/mdstat
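As an alternative to watching /proc/mdstat, mdadm can block until the resynchronization finishes, which can be convenient in scripts:
mdadm --wait /dev/md0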