This section describes how to install, configure, and manage a device-level software RAID 1 across a network using DRBD (Distributed Replicated Block Device) for Linux.
DRBD allows you to create a mirror of two block devices that are located at two different sites across an IP network. When used with HeartBeat 2 (HB2), DRBD supports distributed high-availability Linux clusters.
Note: The data traffic between mirrors is not encrypted. For secure data exchange, you should deploy a virtual private network (VPN) solution for the connection.
Data on the primary device is replicated to the secondary device in a way that ensures that both copies of the data are always identical.
By default, DRBD uses TCP port 7788 for communication between DRBD nodes. Each DRBD resource uses a separate port. Make sure that your firewall does not prevent communication on the configured ports.
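If a firewall is active on either node, the DRBD port must be opened before the nodes can connect. As a hedged sketch for a system managed directly with iptables (adjust the port number and add source restrictions to match your network):

# allow DRBD replication traffic on the default TCP port 7788
iptables -A INPUT -p tcp --dport 7788 -j ACCEPT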
The open source version of DRBD supports a maximum total size of 4 TB for all devices. If you need a larger device, you can use DRBD+, which is commercially available from Linbit.
You must set up the DRBD devices before creating file systems on them. Everything that involves user data should be done solely through the /dev/drbd<n> device, not on the raw device, because DRBD uses the last 128 MB of the raw device for metadata if you specify internal for the meta-disk setting.
Note: Make sure to create file systems only on the /dev/drbd<n> device, never on the raw device.
For example, if the raw device is 1024 MB in size, the DRBD device has only 896 MB available for data, with 128 MB hidden and reserved for the metadata. Any attempt to access the space between 896 MB and 1024 MB fails because it is not available for user data.
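To verify how much space is actually usable, you can compare the size of the raw device with the size of the DRBD device after the resource has been brought up. A minimal check, assuming the device names used later in this section:

# size of the raw device in bytes
blockdev --getsize64 /dev/sdc
# size of the DRBD device in bytes (smaller, because internal metadata is reserved)
blockdev --getsize64 /dev/drbd0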
Install the High Availability (HA) pattern on both SUSE Linux Enterprise Servers in your networked cluster. Installing HA also installs the drbd program files.
Log in as the root user or equivalent, then open YaST.
Choose Software+Software Management.
Change the filter to Patterns.
Select the High Availability pattern.
Click Accept.
Install the drbd kernel modules on both servers.
Log in as the root user or equivalent, then open YaST.
Choose Software+Software Management.
Change the filter to Search.
Type drbd, then click Search.
Select all of the drbd-kmp-* packages.
Click Accept.
DRBD is controlled by settings in the /etc/drbd.conf configuration file. The file contents should be identical on both nodes. A sample configuration file is located in the /usr/share/doc/packages/drbd folder. Options are also described in the drbd.conf(5) man page.
Note: Beginning with DRBD 8.3, the configuration file is split into separate files, located under the /etc/drbd.d/ directory.
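For example, with DRBD 8.3 and later the top-level /etc/drbd.conf shipped with the package typically contains only include statements, and each resource is defined in its own file (a hedged example; the exact file names are an assumption):

include "drbd.d/global_common.conf";
include "drbd.d/*.res";

A resource such as r0 would then live in a file like /etc/drbd.d/r0.res.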
Only one server can mount a DRBD partition at a time. On node 1, the assigned disk (such as /dev/sdc) is mounted as primary and mirrored to the same device (/dev/sdc) on node 2.
Note: The following procedure uses the server names node 1 and node 2, and the cluster resource name r0. It sets up node 1 as the primary node. Make sure to modify the instructions to use your own node and file names.
Log in as the root user or equivalent on each node.
Open the /etc/drbd.conf file on the primary node (node1) in a text editor, modify the following parameters in the on hostname {} sections, then save the file.
on: Specify the hostname for each node, such as node1 and node2.
device: Specify the assigned /dev/drbd<n> device, such as /dev/drbd0.
disk: Specify the assigned disk (such as /dev/sdc) or partition (such as /dev/sdc7) to use for the /dev/drbd<n> device.
address: Specify the IP address of each node and the port number (default is 7788) to use for communications between the nodes. Each resource needs an individual port, usually beginning with port 7780. The port must be opened in the firewall on each node.
meta-disk: Specify internal to use the last 128 MB of the device for metadata, or specify an external device to use.
All of these options are explained in the examples in the /usr/share/doc/packages/drbd/drbd.conf file and in the drbd.conf(5) man page.
For example, the following is a sample DRBD resource setup that defines /dev/drbd0:
global {
  usage-count ask;
}
common {
  protocol C;
}
resource r0 {
  on node1 {
    device /dev/drbd0;
    disk /dev/sdb;
    address 192.168.1.101:7788;
    meta-disk internal;
  }
  on node2 {
    device /dev/drbd0;
    disk /dev/sdb;
    address 192.168.1.102:7788;
    meta-disk internal;
  }
}
Verify the syntax of your configuration file by entering
drbdadm dump all
Copy the /etc/drbd.conf file to the same location on the secondary server (node 2).
scp /etc/drbd.conf node2:/etc
Initialize and start the drbd service on both systems by entering the following commands on each node:
drbdadm --ignore-sanity-checks create-md r0
rcdrbd start
Configure node1 as the primary node by entering the following on node1:
drbdsetup /dev/drbd0 primary --do-what-I-say
Note: The --do-what-i-say option has been renamed to --overwrite-data-of-peer in recent versions of DRBD.
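On DRBD versions that use the renamed option, the equivalent step is performed with drbdadm. A hedged example for the resource r0 defined above; run it only on the node whose data you want to keep, because the peer's data is overwritten:

# force this node to become primary and start the initial sync to the peer
drbdadm -- --overwrite-data-of-peer primary r0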
Check the DRBD service status by entering the following on each node:
rcdrbd status
Before proceeding, wait until the block devices on both nodes are fully synchronized. Repeat the rcdrbd status command to follow the synchronization progress.
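The synchronization state can also be read from /proc/drbd, which shows the connection state, the disk states, and a progress indicator during the initial sync:

# show DRBD connection state, disk states, and resync progress
cat /proc/drbd

When the connection state reports Connected and both disk states report UpToDate/UpToDate, the initial synchronization should be finished.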
After the block devices on both nodes are fully synchronized, format the DRBD device on the primary with a file system such as reiserfs. Any Linux file system can be used. For example, enter
mkfs.reiserfs -f /dev/drbd0
Note: Always use the /dev/drbd<n> device for the file system, never the raw device.
If the install and configuration procedures worked as expected, you are ready to run a basic test of the drbd functionality. This test also helps with understanding how the software works.
Test the DRBD service on node 1.
Open a terminal console, then log in as the root user or equivalent.
Create a mount point on node 1, such as /r0mount, by entering
mkdir /r0mount
Mount the drbd device by entering
mount -o rw /dev/drbd0 /r0mount
Create a file from the primary node by entering
touch /r0mount/from_node1
Test the DRBD service on node 2.
Open a terminal console, then log in as the root user or equivalent.
Dismount the disk on node 1 by typing the following command on node 1:
umount /r0mount
Downgrade the DRBD service on node 1 by typing the following command on node 1:
drbdadm secondary r0
On node 2, promote the DRBD service to primary by entering
drbdadm primary r0
On node 2, check to see if node 2 is primary by entering
rcdrbd status
On node 2, create a mount point such as /r0mount, by entering
mkdir /r0mount
On node 2, mount the DRBD device by entering
mount -o rw /dev/drbd0 /r0mount
Verify that the file you created on node 1 in Step 1.d is viewable by entering
ls /r0mount
The /r0mount/from_node1 file should be listed.
If the service is working on both nodes, the DRBD setup is complete.
Set up node 1 as the primary again.
Dismount the disk on node 2 by typing the following command on node 2:
umount /r0mount
Downgrade the DRBD service on node 2 by typing the following command on node 2:
drbdadm secondary r0
On node 1, promote the DRBD service to primary by entering
drbdadm primary r0
On node 1, check to see if node 1 is primary by entering
service drbd status
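The manual switchover exercised in this test always follows the same pattern. The following is a minimal sketch of the sequence, assuming the resource name r0, the device /dev/drbd0, and the mount point /r0mount used above:

# on the node that is currently primary: release the device
umount /r0mount
drbdadm secondary r0

# on the node that should take over: promote it and mount the device
drbdadm primary r0
mount -o rw /dev/drbd0 /r0mount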
To get the service to automatically start and fail over if the server has a problem, you can set up DRBD as a high availability service with HeartBeat 2.
For information about installing and configuring HeartBeat 2 for SUSE Linux Enterprise Server 10, see the HeartBeat 2 Installation and Setup Guide on the Novell Documentation Web site for SUSE Linux Enterprise Server 10.
Use the following to troubleshoot problems with your DRBD setup:
If the initial drbd setup does not work as expected, there is probably something wrong with your configuration.
To get information about the configuration:
Open a terminal console, then log in as the root user or equivalent.
Test the configuration file by running drbdadm with the -d option. Enter
drbdadm -d adjust r0
In a dry run of the adjust option, drbdadm compares the actual configuration of the DRBD resource with your DRBD configuration file, but it does not execute the calls. Review the output to make sure you know the source and cause of any errors.
If there are errors in the drbd.conf file, correct them before continuing.
If the partitions and settings are correct, run drbdadm again without the -d option. Enter
drbdadm adjust r0
This applies the configuration file to the DRBD resource.
Note that for DRBD, hostnames are case sensitive; Node0 would therefore be a different host than node0.
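drbdadm matches the on sections in drbd.conf against the hostname that the node itself reports, so it can help to compare that name with your configuration on each node:

# print the hostname that DRBD compares against the on sections
uname -n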
If your system is unable to connect to the peer, the cause may also be a local firewall. By default, DRBD uses TCP port 7788 to access the other node. Make sure that this port is accessible on both nodes.
The --do-what-i-say option has been renamed to --overwrite-data-of-peer in recent versions of DRBD.
When DRBD does not know which of the real devices holds the latest data, it enters a split brain condition. In this case, the DRBD subsystems on both nodes come up as secondary and do not connect to each other. The following message is written to /var/log/messages:
Split-Brain detected, dropping connection!
To resolve this situation, the node with the data to be discarded should be assigned as secondary.
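On DRBD 8.x, a manual split brain recovery typically follows the sketch below; the exact syntax can vary between releases, so verify it against the drbdadm(8) man page for your version. Run the first two commands on the node whose changes are to be discarded, then reconnect the other node if it is standing alone:

# on the split brain victim (its local changes are discarded)
drbdadm secondary r0
drbdadm -- --discard-my-data connect r0

# on the surviving node, if it is in StandAlone state
drbdadm connect r0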
The following open-source resources are available for DRBD:
Find a commented example configuration for DRBD at /usr/share/doc/packages/drbd/drbd.conf.
The following man pages for DRBD are available in the distribution:
drbd(8)
drbddisk(8)
drbdsetup(8)
drbdadm(8)
drbd.conf(5)
DRBD references at the Linux High-Availability Project Web site.
For information about installing and configuring HeartBeat 2 for SUSE Linux Enterprise Server 10, see the HeartBeat 2 Installation and Setup Guide on the Novell Documentation Web site for SUSE Linux Enterprise Server 10.