This section describes how to create and manage software RAIDs with the Enterprise Volume Management System (EVMS). EVMS supports only RAIDs 0, 1, 4, and 5 at this time. For RAID 6 and 10 solutions, see Chapter 7, Managing Software RAIDs 6 and 10 with mdadm.
A RAID combines multiple devices into a multi-disk array to provide resiliency in the storage device and to improve storage capacity and I/O performance. If a disk fails, some RAID levels keep data available in a degraded mode until the failed disk can be replaced and its content reconstructed.
A software RAID provides the same high availability that you find in a hardware RAID. The key operational differences are described in the following table:
Table 6.1. Comparison of Software RAIDs and Hardware RAIDs
The following table describes the advantages and disadvantages of the RAID levels supported by EVMS. The description assumes that the component devices reside on different disks and that each disk has its own dedicated I/O capability.
For information about creating complex or nested RAID devices with mdadm, see Chapter 7, Managing Software RAIDs 6 and 10 with mdadm.
Table 6.2. RAID Levels Supported by EVMS
The following table compares the read and write performance for RAID devices.
Table 6.3. Read and Write Performance for RAIDs
The following table compares the disk fault tolerance for RAID devices.
Table 6.4. Fault Tolerance for RAIDs
In EVMS management tools, the following RAID configuration options are provided:
Table 6.5. Configuration Options in EVMS
Linux software RAID cannot be used underneath clustered file systems because it does not support concurrent activation. If you want RAID and OCFS2, you need the RAID to be handled by the storage subsystem.
Activating Linux software RAID devices concurrently on multiple servers can result in data corruption or inconsistencies.
For efficient use of space and performance, the disks you use to create the RAID should have the same storage capacity. Typically, if component devices are not of identical storage capacity, then each member of the RAID uses only an amount of space equal to the capacity of the smallest member disk.
Version 2.3 and later of mdadm supports component devices up to 4 TB in size each. Earlier versions support component devices up to 2 TB in size.
If you have a local disk, external disk arrays, or SAN devices that are larger than the supported device size, use a third-party disk partitioner to carve the devices into smaller logical devices. |
You can combine up to 28 component devices to create the RAID array. The md RAID device you create can be up to the maximum device size supported by the file system you plan to use. For information about file system limits for SUSE Linux Enterprise Server 10, see “Large File System Support” in the SUSE Linux Enterprise Server 10 Installation and Administration Guide.
In general, each storage object included in the RAID should be from a different physical disk to maximize I/O performance and to achieve disk fault tolerance where supported by the RAID level you use. In addition, they should be of the same type (disks, segments, or regions).
Using component devices of differing speeds might introduce a bottleneck during periods of demanding I/O. The best performance can be achieved by using the same brand and models of disks and controllers in your hardware solution. If they are different, you should try to match disks and controllers with similar technologies, performance, and capacity. Use a low number of drives on each controller to maximize throughput.
As with any hardware solution, using the same brand and model introduces the risk of concurrent failures over the life of the product, so plan maintenance accordingly.
The following table provides recommendations for the minimum and maximum number of storage objects to use when creating a software RAID:
Table 6.6. Recommended Number of Storage Objects to Use in the Software RAID
Connection fault tolerance can be achieved by having multiple connection paths to each storage object in the RAID. For more information about configuring multipath I/O support before configuring a software RAID, see Chapter 5, Managing Multipath I/O for Devices.
RAID 5 uses an algorithm to determine the layout of stripes and parity. The following table describes the algorithms.
Table 6.7. RAID 5 Algorithms
For information about the layout of stripes and parity with each of these algorithms, see Linux RAID-5 Algorithms.
The Multi-Disk (MD) plug-in supports creating software RAIDs 0 (striping), 1 (mirror), 4 (striping with dedicated parity), and 5 (striping with distributed parity). The MD plug-in to EVMS allows you to manage all of these MD features as “regions” with the Regions Manager.
The Device Mapper plug-in supports the following features in the EVMS MD Region Manager:
Multipath I/O: Connection fault tolerance and load balancing for connections between the server and disks where multiple paths are available. If you plan to use multipathing, you should configure MPIO for the devices that you plan to use in the RAID before configuring the RAID itself. For information, see Chapter 5, Managing Multipath I/O for Devices.
The EVMS interface manages multipathing under the MD Region Manager, which originally supported the md multipath functions. It uses the legacy md terminology in the interface and in naming of device nodes, but implements the storage objects with Device Mapper.
Linear RAID: A linear concatenation of discontinuous areas of free space from the same or multiple storage devices. Areas can be of different sizes.
Snapshots: Snapshots of a file system at a particular point in time, even while the system is active, thereby allowing a consistent backup.
The Device Mapper driver is not started by default in the rescue system.
Open a terminal console, then log in as the root user or equivalent.
Start the EVMS GUI by entering the following at the terminal console prompt:
evmsgui
If the disks have not been initialized, initialize them by adding the DOS Segment Manager now.
The following instructions assume you are initializing new disks. For information about initializing an existing disk or a disk moved from another system, see Section 4.2, “Initializing Disks”.
Repeat the following steps for each disk that you want to initialize:
Select the menu command for adding a segment manager to a storage object.
From the list of segment managers, select the DOS Segment Manager, then continue.
Select the device, then click the button to initialize it.
If segments have not been created on the disks, create a segment on each disk that you plan to use in the RAID.
For x86 platforms, this step is optional if you treat the entire disk as one segment.
For IA-64 platforms, this step is necessary to make the RAID creation option available in the Regions Manager.
For information about creating segments, see Section 4.4, “Creating Disk Segments (or Partitions)”.
Select the menu command for creating a disk segment to open the segment creation dialog box.
Select the free space segment you want to use.
Specify the amount of space to use for the segment.
Specify the segment options, then click the button to create the segment.
Create and configure a software RAID device.
Select the menu command for creating a region to open the Create Storage Region dialog box.
Specify the type of software RAID you want to create by selecting the appropriate Region Manager (RAID 0, 1, 4, or 5), then continue.
From the Storage Objects listed, select the ones to use for the RAID device.
The order of the objects in the RAID is implied by their order in the list.
Specify values for the configuration options by changing the following default settings as desired.
For RAIDs 1, 4, or 5, optionally specify a device to use as the spare disk for the RAID. The default is none.
For RAIDs 0, 4, or 5, specify the chunk (stripe) size in KB. The default is 32 KB.
For RAIDs 4 or 5, specify whether to use dedicated parity (RAID 4) or distributed parity (RAID 5, the default).
For RAID 5, specify the algorithm to use for striping and parity. The default is Left Symmetric.
For information about these settings, see Section 6.1.5, “Configuration Options for RAIDs”.
Click the button to create the RAID device. The device is created under the /dev/evms/md directory and is given a name such as md0, so its EVMS mount location is /dev/evms/md/md0.
Specify a human-readable label for the device.
Select the menu command for assigning the label.
Select the device that you created in Step 5.
Specify a name for the device.
Use standard ASCII characters and naming conventions. Spaces are allowed.
Click the button to apply the name.
Create a file system on the RAID device you created.
Select the menu command for making a file system to view a list of file system modules.
Select the type of file system you want to create.
Select the RAID device you created in Step 5, such as /dev/evms/md/md0.
Specify a name to use as the volume label, then continue.
The name must not contain spaces or the file system will fail to mount later.
Click the button to create the file system.
Mount the RAID device.
Select the menu command for mounting a file system.
Select the RAID device you created in Step 5, such as /dev/evms/md/md0.
Specify the location where you want to mount the device, such as /home.
Click the button to mount the device.
Enable boot.evms to activate EVMS automatically at reboot.
In YaST, open the runlevel (system services) editor, select the boot.evms service, enable it, then confirm your changes.
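As an alternative to the YaST runlevel editor, you might be able to enable the service from the command line. This is only a sketch; it assumes that chkconfig manages boot scripts on your system, as it does on SUSE Linux Enterprise:

chkconfig boot.evms on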
Edit the /etc/fstab file to automount the RAID mount point created in Step 8.c, or mount the device manually from evmsgui.
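For example, a minimal /etc/fstab entry for the device and mount point used in this procedure might look like the following. The ext3 file system type and the mount options are assumptions; use the file system you actually created and the options you need:

/dev/evms/md/md0   /home   ext3   defaults   1 2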
This section explains how to expand a RAID by adding segments to it.
Before you can expand the size of a RAID device, you must deactivate it.
In a RAID 1 device, each member segment contains its own copy of all of the data stored in the RAID. You can add a mirror to the RAID to increase redundancy. The segment must be at least the same size as the smallest member segment in the existing RAID 1 device. Any excess space in the segment is not used. Ideally, all member segments of a RAID 1 device are the same size.
If you have not set up a spare disk, do it now.
For information, see Section 6.4, “Adding or Removing a Spare Disk”.
Use the Activate Spare (activatespare plug-in) function to add it to the RAID 1 device as a new mirror.
If the RAID region is clean and operating normally, the kernel driver adds the new object as a regular spare, and it acts as a hot standby for future failures. If the RAID region is currently degraded, the kernel driver immediately activates the new spare object and begins synchronizing the data and parity information.
The MD driver allows you to optionally designate a spare disk (device, segment, or region) for RAID 1, 4, and 5 devices. You can assign a spare disk when you create the RAID or at any time thereafter. The RAID can be active and in use when you add or remove the spare. The spare is activated for the RAID only on disk failure.
The advantage of specifying a spare disk for a RAID is that the system monitors the failure and begins recovery without human interaction. The disadvantage is that the space on the spare disk is not available until it is activated by a failed RAID.
As noted in Section 6.1.2, “Overview of RAID Levels”, RAIDs 1, 4, and 5 can tolerate at least one disk failure. Any given RAID can have one spare disk designated for it, but the spare itself can serve as the designated spare for one RAID, for multiple RAIDs, or for all arrays. The spare disk is a hot standby until it is needed. It is not an active member of any RAIDs where it is assigned as the spare disk until it is activated for that purpose.
If a spare disk is defined for the RAID, the RAID automatically deactivates the failed disk and activates the spare disk on disk failure. The MD driver then begins synchronizing mirrored data for a RAID 1 or reconstructing the missing data and parity information for RAIDs 4 and 5. The I/O performance remains in a degraded state until the failed disk’s data is fully remirrored or reconstructed.
Creating a spare-group name allows a single hot spare to service multiple RAID arrays. The spare-group name can be any character string, but must be uniquely named for the server. For mdadm to move spares from one array to another, the different arrays must be labelled with the same spare-group name in the configuration file.
For example, when mdadm detects that an array is missing a component device, it first checks to see if the array has a spare device. If no spare is available, mdadm looks in the array’s assigned spare-group for another array that has a full complement of working drives and a spare. It attempts to remove the spare from the working array and add it to the degraded array. If the removal succeeds but the adding fails, then the spare is added back to its source array.
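For example, a hedged /etc/mdadm/mdadm.conf excerpt that places two arrays in the same spare group might look like the following. The UUIDs and the spare-group name are placeholders; a spare in either array can then service both:

ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd spare-group=shared
ARRAY /dev/md1 UUID=11111111:22222222:33333333:44444444 spare-group=shared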
When you create a RAID 1, 4, or 5 in EVMS, specify the spare disk in the configuration options dialog box. You can browse to select the available device, segment, or region that you want to be the RAID’s spare disk. For information, see Step 5.d in Section 6.2, “Creating and Configuring a Software RAID”.
The RAID 1, 4, or 5 device can be active and in use when you add a spare disk to it.
Prepare a disk, segment, or region to use as the replacement disk, just as you did for the component devices of the RAID device.
In EVMS, select the function for adding a spare disk (the addspare plug-in for the EVMS GUI).
Select the RAID device you want to manage from the list of Regions, then continue.
Select the device to use as the spare disk.
Click the button to add the spare disk.
The RAID 1, 4, or 5 device can be active and in use when you remove its spare disk.
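If you manage the array with mdadm rather than the EVMS GUI, a roughly equivalent way to remove a designated spare is shown below. The device names are hypothetical; a spare (or failed) device can be removed while the array is active:

mdadm /dev/md0 --remove /dev/sde1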
RAIDs 1, 4, and 5 can survive a disk failure. A RAID 1 device survives even if all but one of its mirrors fail. Its read performance is degraded without the multiple data sources available, but its write performance might actually improve while it is not writing to the failed mirrors. During the synchronization of the replacement disk, write and read performance are both degraded. A RAID 5 can survive a single disk failure at a time. A RAID 4 can survive a single disk failure at a time if the disk is not the parity disk.
Disks can fail for many reasons such as the following:
Disk crash
Disk pulled from the system
Drive cable removed or loose
I/O errors
When a disk fails, the RAID removes the failed disk from membership in the RAID, and operates in a degraded mode until the failed disk is replaced by a spare. Degraded mode is resolved for a single disk failure in one of the following ways:
Spare Exists: If the RAID has been assigned a spare disk, the MD driver automatically activates the spare disk as a member of the RAID, then the RAID begins synchronizing (RAID 1) or reconstructing (RAID 4 or 5) the missing data.
No Spare Exists: If the RAID does not have a spare disk, the RAID operates in degraded mode until you configure and add a spare. When you add the spare, the MD driver detects the RAID’s degraded mode, automatically activates the spare as a member of the RAID, then begins synchronizing (RAID 1) or reconstructing (RAID 4 or 5) the missing data.
On failure, md automatically removes the failed drive as a component device in the RAID array. To determine which device is a problem, use mdadm and look for the device that has been reported as “removed”.
Enter the following at a terminal console prompt:
mdadm -D /dev/md1
Replace /dev/md1 with the actual path for your RAID.
For example, an mdadm report for a RAID 1 device consisting of /dev/sda2 and /dev/sdb2 might look like this:
blue6:~ # mdadm -D /dev/md1
/dev/md1:
Version : 00.90.03
Creation Time : Sun Jul 2 01:14:07 2006
Raid Level : raid1
Array Size : 180201024 (171.85 GiB 184.53 GB)
Device Size : 180201024 (171.85 GiB 184.53 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Aug 15 18:31:09 2006
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 8a9f3d46:3ec09d23:86e1ffbc:ee2d0dd8
Events : 0.174164
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 18 1 active sync /dev/sdb2
The “Total Devices : 1”, “Active Devices : 1”, and “Working Devices : 1” indicate that only one of the two devices is currently active. The RAID is operating in a “degraded” state.
The “Failed Devices : 0” might be confusing. This setting has a non-zero number only for that brief period where the md driver finds a problem on the drive and prepares to remove it from the RAID. When the failed drive is removed, it reads “0” again.
In the devices list at the end of the report, the device with the “removed” state for Device 0 indicates that the device has been removed from the software RAID definition, not that the device has been physically removed from the system. It does not specifically identify the failed device. However, the working device (or devices) are listed. Hopefully, you have a record of which devices were members of the RAID. By the process of elimination, the failed device is /dev/sda2.
The “Spare Devices : 0” indicates that you do not have a spare assigned to the RAID. You must assign a spare device to the RAID so that it can be automatically added to the array and replace the failed device.
When a component device fails, the md driver replaces the failed device with a spare device assigned to the RAID. You can either keep a spare device assigned to the RAID as a hot standby to use as an automatic replacement, or assign a spare device to the RAID as needed.
Even if you correct the problem that caused the disk to fail, the RAID does not automatically accept it back into the array, because it is a “faulty object” in the RAID and is no longer synchronized with the RAID.
If a spare is available, md automatically removes the failed disk, replaces it with the spare disk, then begins to synchronize the data (for RAID 1) or reconstruct the data from parity (for RAIDs 4 or 5).
If a spare is not available, the RAID operates in degraded mode until you assign a spare device to the RAID.
To assign a spare device to the RAID:
Prepare the disk as needed to match the other members of the RAID.
In EVMS, select the function for adding a spare disk (the addspare plug-in for the EVMS GUI).
Select the RAID device you want to manage from the list of Regions, then continue.
Select the device to use as the spare disk.
Click the button to add the spare disk.
The md driver automatically begins the replacement and reconstruction or synchronization process.
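If you manage the array with mdadm rather than the EVMS GUI, a roughly equivalent command adds a prepared partition as a spare, which the md driver then uses to rebuild the degraded array. The partition name is hypothetical:

mdadm /dev/md1 --add /dev/sdc2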
Monitor the status of the RAID to verify the process has begun.
For information about how to monitor RAID status, see Section 6.6, “Monitoring Status for a RAID”.
Continue with Section 6.5.4, “Removing the Failed Disk”.
You can remove the failed disk at any time after it has been replaced with the spare disk. EVMS does not make the device available for other use until you remove it from the RAID. After you remove it, the disk appears in the list of available objects in the EVMS GUI, where it can be used for any purpose.
If you pull a disk or if it is totally unusable, EVMS no longer recognizes the failed disk as part of the RAID.
The RAID device can be active and in use when you remove its faulty object.
The EVMS GUI (evmsgui) reports any software RAID devices that are defined and whether they are currently active.
A summary of RAID and status information (active/not active) is also available in the /proc/mdstat file.
Open a terminal console, then log in as the root user or equivalent.
View the /proc/mdstat file by entering the following at the console prompt:
cat /proc/mdstat
Evaluate the information.
The following table shows an example output and how to interpret the information.
| Status Information | Description | Interpretation |
|---|---|---|
| (one status line per RAID, listed by label) | List of the RAIDs on the server by RAID label. | You have two RAIDs defined on the server. |
| 35535360 blocks level 5, 128k chunk, algorithm 2 [5/4] [U_UUU] | The size in blocks, RAID level, chunk size, parity algorithm, and member status of the RAID. | Only four of the five member devices are up ([5/4] and [U_UUU]), so this RAID is operating in a degraded state. |
| unused devices: <none> | All segments in the RAID are in use. | There are no spare devices available on the server. |
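For reference, /proc/mdstat output for a single degraded RAID 5 might look similar to the following. The device names are illustrative, and the personalities line depends on which RAID modules are loaded on your system:

Personalities : [raid5]
md0 : active raid5 sdk1[4] sdj1[3] sdi1[2] sdg1[0]
      35535360 blocks level 5, 128k chunk, algorithm 2 [5/4] [U_UUU]

unused devices: <none>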
To view the RAID status with the mdadm command, enter the following at a terminal prompt:
mdadm -D /dev/mdx
Replace mdx with the RAID device number.
In the following example, only four of the five devices in the RAID are active (Raid Devices : 5, Total Devices : 4). When the RAID was created, its component devices were numbered 0 to 4 and were ordered according to their alphabetic appearance in the list where they were chosen, such as /dev/sdg1, /dev/sdh1, /dev/sdi1, /dev/sdj1, and /dev/sdk1. From the pattern of filenames of the remaining devices, you can determine that the device that was removed was /dev/sdh1.
/dev/md0:
Version : 00.90.03
Creation Time : Sun Apr 16 11:37:05 2006
Raid Level : raid5
Array Size : 35535360 (33.89 GiB 36.39 GB)
Device Size : 8883840 (8.47 GiB 9.10 GB)
Raid Devices : 5
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Apr 17 05:50:44 2006
State : clean, degraded
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 128K
UUID : 2e686e87:1eb36d02:d3914df8:db197afe
Events : 0.189
Number Major Minor RaidDevice State
0 8 97 0 active sync /dev/sdg1
1 8 0 1 removed
2 8 129 2 active sync /dev/sdi1
3 8 45 3 active sync /dev/sdj1
4 8 161 4 active sync /dev/sdk1
In the following mdadm report, four of the five disks are active (Active Devices : 4) and five devices are working (Working Devices : 5). The failed disk was automatically detected and removed from the RAID (Failed Devices : 0). The spare was activated as the replacement disk and has assumed the device name of the failed disk (/dev/sdh1). The faulty object (the failed disk that was removed from the RAID) is not identified in the report. The RAID is running in degraded mode while it recovers (State : clean, degraded, recovering). The data is being rebuilt onto the spare (spare rebuilding /dev/sdh1), and the process is 3% complete (Rebuild Status : 3% complete).
mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sun Apr 16 11:37:05 2006
Raid Level : raid5
Array Size : 35535360 (33.89 GiB 36.39 GB)
Device Size : 8883840 (8.47 GiB 9.10 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Apr 17 05:50:44 2006
State : clean, degraded, recovering
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 128K
Rebuild Status : 3% complete
UUID : 2e686e87:1eb36d02:d3914df8:db197afe
Events : 0.189
Number Major Minor RaidDevice State
0 8 97 0 active sync /dev/sdg1
1 8 113 1 spare rebuilding /dev/sdh1
2 8 129 2 active sync /dev/sdi1
3 8 145 3 active sync /dev/sdj1
4 8 161 4 active sync /dev/sdk1
You can follow the progress of the synchronization or reconstruction process by examining the /proc/mdstat file.
You can control the speed of synchronization by setting parameters in the /proc/sys/dev/raid/speed_limit_min and /proc/sys/dev/raid/speed_limit_max files. To speed up the process, echo a larger number into the speed_limit_min file.
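For example, to raise the minimum rebuild speed to about 10 MB/s per device (the value is only an illustration; tune it to what your disks and controllers can sustain), enter the following as the root user:

echo 10000 > /proc/sys/dev/raid/speed_limit_min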
You might want to configure the mdadm service to send an e-mail alert for software RAID events. Monitoring is only meaningful for RAIDs 1, 4, 5, 6, 10 or multipath arrays because only these have missing, spare, or failed drives to monitor. RAID 0 and Linear RAIDs do not provide fault tolerance so they have no interesting states to monitor.
The following table identifies RAID events and indicates which events trigger e-mail alerts. All events cause the program to run. The program is run with two or three arguments: the event name, the array device (such as /dev/md1), and possibly a second device. For Fail, Fail Spare, and Spare Active, the second device is the relevant component device. For Move Spare, the second device is the array that the spare was moved from.
Table 6.8. RAID Events in mdadm
| RAID Event | Trigger E-Mail Alert | Description |
|---|---|---|
| Device Disappeared | No | An md array that was previously configured appears to no longer be configured. (syslog priority: Critical) If mdadm was told to monitor an array which is RAID0 or Linear, then it reports DeviceDisappeared with the extra information Wrong-Level. This is because RAID0 and Linear do not support the device-failed, hot-spare, and resynchronize operations that are monitored. |
| Rebuild Started | No | An md array started reconstruction. (syslog priority: Warning) |
| Rebuild NN | No | Where NN is 20, 40, 60, or 80. This indicates the percent completed for the rebuild. (syslog priority: Warning) |
| Rebuild Finished | No | An md array that was rebuilding is no longer rebuilding, either because it finished normally or was aborted. (syslog priority: Warning) |
| Fail | Yes | An active component device of an array has been marked as faulty. (syslog priority: Critical) |
| Fail Spare | Yes | A spare component device that was being rebuilt to replace a faulty device has failed. (syslog priority: Critical) |
| Spare Active | No | A spare component device that was being rebuilt to replace a faulty device has been successfully rebuilt and has been made active. (syslog priority: Info) |
| New Array | No | A new md array has been detected in the /proc/mdstat file. (syslog priority: Info) |
| Degraded Array | Yes | A newly noticed array appears to be degraded. This message is not generated when mdadm notices a drive failure that causes degradation. It is generated only when mdadm notices that an array is degraded when it first sees the array. (syslog priority: Critical) |
| Move Spare | No | A spare drive has been moved from one array in a spare group to another to allow a failed drive to be replaced. (syslog priority: Info) |
| Spares Missing | Yes | The configuration file indicates that an array should have spare devices, but mdadm detects fewer spares than expected when it first sees the array. (syslog priority: Warning) |
| Test Message | Yes | An array was found at startup, and the --test flag was given. (syslog priority: Info) |
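The program mentioned above is specified with a PROGRAM line in mdadm.conf (or with the --program option of mdadm --monitor). The following is a minimal sketch; the script path and its contents are assumptions, not part of the standard configuration. mdadm calls the program with the event name, the array device, and possibly a component device:

PROGRAM /usr/local/sbin/handle-md-event

An example handler could simply log the event to syslog:

#!/bin/sh
# Hypothetical handler called by mdadm --monitor.
# $1 = event name, $2 = md array device, $3 = component device (if any)
logger -t mdadm-event "event=$1 array=$2 component=$3"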
To configure an e-mail alert:
At a terminal console, log in as the root user.
Edit the /etc/mdadm/mdadm.conf file to add your e-mail address for receiving alerts. For example, specify the MAILADDR value (using your own e-mail address, of course):
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2
UUID=1c661ae4:818165c3:3f7a4661:af475fda
devices=/dev/sdb3,/dev/sdc3
MAILADDR yourname@example.com
The MAILADDR line gives an e-mail address that alerts should be sent to when mdadm is running in --monitor mode with the --scan option. There should be only one MAILADDR line in mdadm.conf, and it should have only one address.
Start mdadm monitoring by entering the following at the terminal console prompt:
mdadm --monitor --mail=yourname@example.com --delay=1800 /dev/md0
The --monitor option causes mdadm to periodically poll a number of md arrays and to report on any events noticed. mdadm never exits once it decides that there are arrays to be checked, so it should normally be run in the background.
In addition to reporting events in this mode, mdadm might move a spare drive from one array to another if they are in the same spare-group and if the destination array has a failed drive but no spares.
Listing the devices to monitor is optional. If any devices are listed on the command line, mdadm monitors only those devices. Otherwise, all arrays listed in the configuration file are monitored. Further, if the --scan option is added to the command, then any other md devices that appear in /proc/mdstat are also monitored.
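To verify that monitoring and mail delivery work before an actual failure occurs, you can run a one-time check. This is a hedged example: the --oneshot option checks the arrays once and exits, and --test generates a Test Message alert for each array found:

mdadm --monitor --scan --oneshot --test --mail=yourname@example.com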
For more information about using mdadm, see the mdadm(8) and mdadm.conf(5) man pages.
To configure the /etc/init.d/mdadmd service instead of running mdadm manually, set the monitoring options in the /etc/sysconfig/mdadm file, as in the following example:
suse:~ # egrep 'MAIL|RAIDDEVICE' /etc/sysconfig/mdadm
MDADM_MAIL="yourname@example.com"
MDADM_RAIDDEVICES="/dev/md0"
MDADM_SEND_MAIL_ON_START=no
suse:~ # chkconfig mdadmd --list
mdadmd 0:off 1:off 2:off 3:on 4:off 5:on 6:off
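After setting these variables, enable the service for the desired runlevels and start it. These commands are a sketch based on the init script and the chkconfig tool shown above:

chkconfig mdadmd on
/etc/init.d/mdadmd start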
If you want to remove the RAID configuration, deactivate the RAID, delete the data on the RAID, and release all resources used by the RAID, do the following:
If you want to keep the data stored on the software RAID device, make sure to back up the data to alternate media, using your normal backup procedures. Make sure the backup is good before proceeding.
Open a terminal console prompt as the root user or equivalent. Use this console to enter the commands described in the remaining steps.
Dismount the software RAID device by entering
umount <raid-device>
Stop the RAID device and its component devices by entering
mdadm --stop <raid-device>
mdadm --stop <member-devices>
For more information about using mdadm, please see the mdadm(8) man page.
Erase the md superblock on each member device so that the disks are no longer recognized as part of the RAID. Enter
mdadm --misc --zero-superblock <member-devices>
You must now reinitialize the disks for other uses, just as you would when adding a new disk to your system.