| Configuration | Use | Advantages | Disadvantages |
|---|---|---|---|
| RAID 0 | When I/O performance is more important than fault tolerance; for example, as in a heavily used database (where data replication is already set up separately). | I/O is distributed across the volumes in a stripe. If you add a volume, you get the straight addition of throughput and IOPS. | Performance of the stripe is limited to the worst performing volume in the set. Loss of a single volume results in a complete data loss for the array. |
| RAID 1 | When fault tolerance is more important than I/O performance; for example, as in a critical application. | Safer from the standpoint of data durability. | Does not provide a write performance improvement; requires more Amazon EC2 to Amazon EBS bandwidth than non-RAID configurations because the data is written to multiple volumes simultaneously. |
For example, two 500 GiB io1 volumes with 4,000 provisioned IOPS each will create a 1000 GiB RAID 0 array with an available bandwidth of 8,000 IOPS and 1,000 MiB/s of throughput, or a 500 GiB RAID 1 array with an available bandwidth of 4,000 IOPS and 500 MiB/s of throughput.
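The figures above follow from simple multiplication: RAID 0 sums capacity and IOPS across member volumes, while RAID 1 keeps one volume's worth of each. A minimal sketch of that arithmetic, with the volume count, size, and provisioned IOPS as assumed example inputs:

```shell
# Assumed inputs: two identical EBS volumes (size and IOPS values are examples)
volumes=2
size_gib=500
iops=4000

# RAID 0 stripes: capacity and IOPS add across members
raid0_size=$((size_gib * volumes))
raid0_iops=$((iops * volumes))

# RAID 1 mirrors: usable capacity and IOPS stay at one volume's worth
raid1_size=$size_gib
raid1_iops=$iops

echo "RAID 0: ${raid0_size} GiB, ${raid0_iops} IOPS"
echo "RAID 1: ${raid1_size} GiB, ${raid1_iops} IOPS"
```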
This documentation provides basic RAID setup examples. For more information about RAID configuration, performance, and recovery, see the Linux RAID Wiki at https://raid.wiki.kernel.org/index.php/Linux_Raid.
Use the mdadm command to create a logical RAID device from the newly attached Amazon EBS volumes. Substitute the number of volumes in your array for number_of_volumes and the device names for each volume in the array (such as /dev/xvdf) for device_name. You can also substitute MY_RAID with your own unique name for the array.
(RAID 0 only) To create a RAID 0 array, execute the following command (note the --level=0 option to stripe the array):
[ec2-user ~]$ sudo mdadm --create --verbose /dev/md0 --level=0 --name=MY_RAID --raid-devices=number_of_volumes device_name1 device_name2
(RAID 1 only) To create a RAID 1 array, execute the following command (note the --level=1 option to mirror the array):
[ec2-user ~]$ sudo mdadm --create --verbose /dev/md0 --level=1 --name=MY_RAID --raid-devices=number_of_volumes device_name1 device_name2
Allow time for the RAID array to initialize and synchronize. You can track the progress of these operations with the following command:
[ec2-user ~]$ sudo cat /proc/mdstat
The following is example output:
Personalities : [raid1]
md0 : active raid1 xvdg[1] xvdf[0]
20955008 blocks super 1.2 [2/2] [UU]
[=========>...........] resync = 46.8% (9826112/20955008) finish=2.9min speed=63016K/sec
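If you want to script a wait-until-synced check, the resync percentage can be pulled out of /proc/mdstat with standard text tools. A minimal sketch, using sample output in the format shown above (a shell variable stands in for /proc/mdstat so the example is self-contained):

```shell
# Extract the resync progress percentage from mdstat-style output.
# The sample text below stands in for the contents of /proc/mdstat.
mdstat_sample='md0 : active raid1 xvdg[1] xvdf[0]
      20955008 blocks super 1.2 [2/2] [UU]
      [=========>...........]  resync = 46.8% (9826112/20955008) finish=2.9min speed=63016K/sec'

# First grep isolates the "resync = NN.N%" phrase; second grep keeps the number
progress=$(printf '%s\n' "$mdstat_sample" | grep -o 'resync = [0-9.]*%' | grep -o '[0-9.]*%')
echo "resync progress: ${progress:-done}"
```

Against a live system you would read /proc/mdstat directly instead of the sample variable; when no resync is in progress the phrase is absent and the fallback prints "done".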
In general, you can display detailed information about your RAID array with the following command:
[ec2-user ~]$ sudo mdadm --detail /dev/md0
The following is example output:
/dev/md0:
Version : 1.2
Creation Time : Mon Jun 27 11:31:28 2016
Raid Level : raid1
Array Size : 20955008 (19.98 GiB 21.46 GB)
Used Dev Size : 20955008 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Jun 27 11:37:02 2016
State : clean
...
...
...
Number Major Minor RaidDevice State
0 202 80 0 active sync /dev/sdf
1 202 96 1 active sync /dev/sdg
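Scripts that wait for a healthy array often read just the State line from this output. A minimal sketch, using a line in the format shown above (a shell variable stands in for the real mdadm --detail output):

```shell
# Pull the array state ("clean", "degraded", etc.) from mdadm --detail style output.
# The sample line below stands in for the real command's output.
detail_sample='          State : clean'

# Split on " : " so leading whitespace in the key field does not matter
state=$(printf '%s\n' "$detail_sample" | awk -F' : ' '/State/ {print $2}')
echo "array state: $state"
```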
Create a file system on your RAID array, and give that file system a label to use when you mount it later. For example, to create an ext4 file system with the label MY_RAID, execute the following command:
[ec2-user ~]$ sudo mkfs.ext4 -L MY_RAID /dev/md0
Depending on the requirements of your application or the limitations of your operating system, you can use a different file system type, such as ext3 or XFS (consult your file system documentation for the corresponding file system creation command).
To ensure that the RAID array is reassembled automatically on boot, create a configuration file to contain the RAID information:
[ec2-user ~]$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
Create a new ramdisk image to preload the block device modules for your new RAID configuration:
[ec2-user ~]$ sudo dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
Create a mount point for your RAID array:
[ec2-user ~]$ sudo mkdir -p /mnt/raid
Finally, mount the RAID device on the mount point that you created:
[ec2-user ~]$ sudo mount LABEL=MY_RAID /mnt/raid
Your RAID device is now ready for use. (Optional) To mount this Amazon EBS volume on every system reboot, add an entry for the device to the /etc/fstab file.
Create a backup of your /etc/fstab file that you can use if you accidentally destroy or delete this file while you are editing it:
[ec2-user ~]$ sudo cp /etc/fstab /etc/fstab.orig
Open the /etc/fstab file using your favorite text editor, such as nano or vim.
Comment out any lines starting with "UUID=" and, at the end of the file, add a new line for your RAID volume using the following format:
device_label mount_point file_system_type fs_mntops fs_freq fs_passno
The last three fields on this line are the file system mount options, the dump frequency of the file system, and the order of file system checks done at boot time. If you don't know what these values should be, then use the values in the example below (defaults,nofail 0 2). For more information about /etc/fstab entries, see the fstab manual page (by entering man fstab on the command line).
For example, to mount the ext4 file system on the device with the label MY_RAID at the mount point /mnt/raid, add the following entry to /etc/fstab:
LABEL=MY_RAID /mnt/raid ext4 defaults,nofail 0 2
This example uses the nofail mount option, which allows the instance to boot even if there are errors in mounting the volume. Debian derivatives, such as Ubuntu, must also add the nobootwait mount option.
After you've added the new entry to /etc/fstab, you need to check that your entry works. Run the sudo mount -a command to mount all file systems in /etc/fstab:
[ec2-user ~]$ sudo mount -a
If the previous command does not produce an error, then your /etc/fstab file is OK and your file system will mount automatically at the next boot. If the command does produce any errors, examine the errors and try to correct your /etc/fstab. Errors in the /etc/fstab file can render a system unbootable. Do not shut down a system that has errors in the /etc/fstab file.
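Before rebooting, you can also sanity-check a new fstab line mechanically: a well-formed entry has exactly six whitespace-separated fields. A minimal sketch, using the example entry from above as the input text:

```shell
# Count the whitespace-separated fields in an fstab-style entry.
# A valid entry has 6: device, mount point, fs type, options, dump freq, pass number.
entry='LABEL=MY_RAID /mnt/raid ext4 defaults,nofail 0 2'
fields=$(printf '%s\n' "$entry" | awk '{print NF}')
echo "fields: $fields"
```

Note that the options field (defaults,nofail) counts as a single field because it contains no whitespace; a stray space inside it would change the count and break the entry.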
(Optional) If you are unsure how to correct /etc/fstab errors, you can always restore your backup /etc/fstab file with the following command:
[ec2-user ~]$ sudo mv /etc/fstab.orig /etc/fstab