RAID Configurations & Recommendations

OVERVIEW

For data storage, RAID (Redundant Array of Independent Disks) is frequently used to guard against storage media failures. RAID and other protection methods often vary in how they are implemented to meet underlying physical drive requirements. Data protection policy is often application specific, involving tradeoffs between performance, capacity, resilience, and recovery time. This document addresses RAID data protection options for the Kingston DCP1000 NVMe device in both Linux and Windows environments.

BACKGROUND

PCIe add-in cards (AICs) are becoming a common way to deploy high-performance PCIe SSD storage in system architectures. Many system designers require RAID or other data protection mechanisms on these AIC storage solutions, which often necessitates deploying multiple AICs so that protection schemes can be implemented. However, deploying multiple AICs may be undesirable due to power, cost, or space constraints.

The DCP1000 NVMe AIC solves this issue because it contains multiple SSDs within a single NVMe device. These SSDs present themselves to the host as individual NVMe drives, enabling various data protection schemes, such as software (SW) RAID, to be implemented on a single card as well as across multiple AICs.

Table 1: Data Protection Examples (Source: Wikipedia)

COMMON CONFIGURATIONS

When a DCP1000 NVMe AIC is deployed into a system, it will appear as four individual physical SSDs. Below are common setup configurations when deploying multi-drive AICs.

Configuration #1: JBOD

Some applications perform data protection directly or can tolerate data loss, eliminating the need for RAID at the AIC level. In a JBOF (Just a Bunch Of Flash) environment, no additional setup is needed. The DCP1000 will present as four independent SSDs, and the application can use each drive without any RAID scheme, if desired. The JBOF configuration still provides end-to-end data path protection, but it will not protect against a media failure of any individual drive. A JBOF setup provides the maximum performance and capacity for each of the four drives inside the DCP1000.
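In a JBOF deployment there is nothing to configure; the drives simply enumerate as separate NVMe block devices. As a minimal sketch, the four controllers on the card typically appear with device names like the following (the indices are an assumption for illustration; actual numbering depends on the host and on any other NVMe devices present):

```shell
# Print the device paths the DCP1000's four controllers would typically
# receive on a system with no other NVMe devices (assumed numbering).
for i in 0 1 2 3; do
  echo "/dev/nvme${i}n1"
done
```

On a live system, `lsblk` or `nvme list` (from nvme-cli) can be used to confirm the actual device names before building any array on top of them.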

Configuration #2: Linux SW RAID

Most operating systems (OSs), including Linux, have built-in support for SW RAID. When the DCP1000 is deployed into a system, it appears as four individual SSDs, and OS-level software (SW) RAID can be used to provide striping or data protection across them. Typical RAID schemes such as RAID 0, 1, 5, and 10 are all supported on a single DCP1000, or can be applied across multiple DCP1000 drives installed in a system. The four drives on a single DCP1000 AIC can be combined into a single logical volume using SW RAID.

Table 2: Example RAID-0 Configuration in Linux
OS: Linux – CentOS 7.2
RAID Example: RAID-0 (striping across 4 devices) – 256K chunk size
Sample Command: mdadm --create /dev/md0 --level=raid0 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 --chunk=256K
Table 4: Example RAID-10 Configuration in Linux
OS: Linux – CentOS 7.2
RAID Example: RAID-10 (mirroring + striping across 4 devices) – 64K chunk size
Sample Command: mdadm --create /dev/md0 --level=raid10 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 --chunk=64K

Based on internal analysis, several important outcomes were confirmed:
1) In-box SW RAID functions with NVMe devices; RAID 0, 1, 5, and 10 were verified
2) Capacity scaling was as expected (per RAID scheme tested)
3) Performance scaling was as expected: 80%–95% of JBOF performance
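The expected capacity scaling in outcome 2 can be sketched with simple shell arithmetic. The per-drive size below is an assumed illustrative value, not a DCP1000 specification:

```shell
# Usable capacity per RAID scheme for n identical drives of size s.
# s=800 (GB) is an assumed illustrative per-drive size.
n=4
s=800
raid0=$(( n * s ))          # striping: full aggregate capacity
raid10=$(( n * s / 2 ))     # mirrored stripes: half capacity
raid5=$(( (n - 1) * s ))    # one drive's worth consumed by parity
echo "RAID-0: ${raid0} GB, RAID-10: ${raid10} GB, RAID-5: ${raid5} GB"
# → RAID-0: 3200 GB, RAID-10: 1600 GB, RAID-5: 2400 GB
```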

Table 3: Example RAID-1 Configuration in Linux
OS: Linux – CentOS 7.2
RAID Example: RAID-1 (mirroring of 2 devices) – 256K chunk size
Sample Command: mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1 --chunk=256K
(md1 can be set up using the remaining two drives)
Table 5: Example RAID-5 Configuration in Linux
OS: Linux – CentOS 7.2
RAID Example: RAID-5 (single fault tolerance across 4 devices) – 256K chunk size
Sample Command: mdadm --create /dev/md0 --level=raid5 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 --chunk=256K
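After creating an array with any of the commands above, it is common to verify the array and persist its configuration so it reassembles at boot. The following is a minimal sketch of those follow-up steps; it requires root privileges and the actual devices, and the config path follows the CentOS convention (other distributions may use /etc/mdadm/mdadm.conf):

```shell
# Verify the array state and its member devices
mdadm --detail /dev/md0

# Create a filesystem on the new array (ext4 chosen here as an example)
mkfs.ext4 /dev/md0

# Persist the array definition so it reassembles at boot (CentOS path)
mdadm --detail --scan >> /etc/mdadm.conf
```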
Configuration #3: Windows SW RAID

Windows environments also have built-in methods for supporting RAID. When the DCP1000 AIC is deployed into a Windows system, it will appear as four individual drives, and Windows SW RAID can be used to provide data protection on them. Typical implementation methods such as Disk Management simple striping or mirroring can be employed, and the Storage Spaces volume manager can also be used. In-box support for Windows NVMe has been confirmed on Windows 8.1, Windows 10, Windows Server 2012 R2, and Windows Server 2016.

Summary

The industry continues to find innovative methods for providing data protection against storage failure. Implementing data protection at the software layer (e.g., SW RAID) allows for more flexible deployment and enables designers to better match the needs of the application being serviced.

DCP1000 NVMe SSDs support multiple drives on a single AIC, and host-level SW RAID can be leveraged to implement the correct data protection method for the application being deployed. Enabling data protection on a single AIC can dramatically reduce the cost and complexity of the data center.
