For the last five years, server and hardware configurations have been remarkably consistent. Whether in an application server, a storage array, or a SaaS backend, the way data center managers build their systems hasn't changed much. Even with the growing push toward edge computing closer to the customer, for both compute and storage, most edge networks mirror existing hardware configurations using SATA or SAS SSDs.
Only in rare instances, where meeting five- or six-nines SLAs for mission-critical applications is a must, have we seen dramatic changes in hardware configurations. In these data centers, redundancy is key, and many have overhauled their storage platforms by switching predominantly to NVMe with enterprise-grade SSDs. These NVMe drives are typically equipped with large DRAM caches to deliver consistent quality of service (QoS), that is, long-term performance stability.
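The gap between five and six nines is easy to quantify. A minimal sketch of the standard availability arithmetic (the exact downtime an SLA permits varies by contract):

```python
# Allowed annual downtime implied by "N nines" availability targets.
# Illustrative arithmetic only; real SLA terms vary by contract.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(nines: int) -> float:
    """Minutes of downtime per year permitted at a given number of nines."""
    availability = 1 - 10 ** -nines
    return MINUTES_PER_YEAR * (1 - availability)

for n in (3, 4, 5, 6):
    print(f"{n} nines -> {allowed_downtime_minutes(n):.2f} min/year")
```

Five nines allows roughly 5.3 minutes of downtime per year; six nines allows barely half a minute, which is why those data centers lean so heavily on redundancy.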
This raises the question: If more data centers are moving to NVMe to ensure uptime, should I upgrade my servers?
The answer is far more complex than a simple yes or no, and in fact opens a host of other questions to address.
Upgrading a simple 1U server or a 10U rack from SATA or SAS is limited by the availability of connections. Most systems that implement SATA or SAS SSDs connect through hardware-based RAID controllers. NVMe, by contrast, attaches directly to PCI Express lanes, which inherently provide faster transfer speeds, and typically relies on software-defined RAID profiles instead.
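The raw speed advantage is worth putting in numbers. A rough comparison of approximate usable-bandwidth ceilings per interface (figures are well-known line rates after encoding overhead, rounded for illustration):

```python
# Approximate usable-bandwidth ceilings per storage interface, in MB/s.
# Rounded, illustrative figures; real-world throughput is lower still.
interfaces = {
    "SATA III (6 Gb/s)": 600,
    "SAS-3 (12 Gb/s)": 1200,
    "PCIe 3.0 x4 (NVMe)": 3940,   # ~985 MB/s per lane * 4 lanes
    "PCIe 4.0 x4 (NVMe)": 7880,   # ~1969 MB/s per lane * 4 lanes
}

sata = interfaces["SATA III (6 Gb/s)"]
for name, mbps in interfaces.items():
    print(f"{name:20s} ~{mbps:5d} MB/s ({mbps / sata:.1f}x SATA III)")
```

Even a PCIe 3.0 x4 NVMe drive has more than six times the interface headroom of SATA III, which is what makes the software-defined RAID trade-off attractive in the first place.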
Just a year ago, most customers were locked into SATA with no plans to move to NVMe. Even the largest tier 1 providers haven't completely made the switch, running roughly a 50/50 mix of SATA and NVMe. That's because NVMe requires more of a technical overhaul.
Not all existing servers have enough PCIe ports to support a large NVMe deployment, and most data centers don’t change their servers as fast as they change their storage arrays. Simply put, if it’s working and providing the amount of performance needed for today’s operations, is there a need to switch?
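The PCIe constraint comes down to lane budgeting. A back-of-the-envelope sketch, with hypothetical lane counts (check your actual CPU and platform specifications):

```python
# Back-of-the-envelope PCIe lane budget for an all-NVMe server.
# Lane counts here are hypothetical; consult your platform's specs.

LANES_PER_NVME = 4          # typical U.2/M.2 NVMe drive uses an x4 link
cpu_lanes = 48              # hypothetical single-socket CPU lane count
reserved = 16               # lanes kept back for NICs, HBAs, GPUs, etc.

lanes_for_storage = cpu_lanes - reserved
max_drives = lanes_for_storage // LANES_PER_NVME

print(f"Lanes free for storage: {lanes_for_storage}")
print(f"Max directly attached NVMe drives: {max_drives}")
```

Under these assumptions the server tops out at eight directly attached NVMe drives, while the same chassis might hold two dozen SATA bays behind a single RAID controller. That mismatch is why server refresh cycles, not drive prices, often gate the transition.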
If so, here are some things to consider for the next build:
How will the change impact your redundancy practices?
Switching to a software-defined storage (SDS) model presents the user with a new way of managing redundancy and controlling physical devices. In some cases, moving from a hardware-controlled storage system to SDS may require changes to certain applications, down to the kernel level, to maintain consistent performance. Furthermore, SDS platforms require users to think differently about how they deploy and configure their storage for redundancy and performance.
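One concrete way redundancy planning changes under SDS is the usable-capacity math for each protection scheme. A simplified model (real SDS platforms add metadata and rebuild overhead on top; drive size and count are hypothetical):

```python
# Usable capacity under common redundancy schemes, simplified.
# Real SDS platforms add metadata, spare, and rebuild overhead.

def usable_tb(raw_tb: float, scheme: str, n_drives: int) -> float:
    if scheme == "mirror":        # RAID 1/10-style, two copies of data
        return raw_tb / 2
    if scheme == "parity":        # RAID 5-style, one drive's worth of parity
        return raw_tb * (n_drives - 1) / n_drives
    if scheme == "3x-replica":    # object-store-style triple replication
        return raw_tb / 3
    raise ValueError(f"unknown scheme: {scheme}")

raw = 8 * 3.84  # eight hypothetical 3.84 TB NVMe drives = 30.72 TB raw
for scheme in ("mirror", "parity", "3x-replica"):
    print(f"{scheme:10s} -> {usable_tb(raw, scheme, 8):.2f} TB usable")
```

The same eight drives yield anywhere from a third to seven-eighths of raw capacity depending on the scheme, a trade-off the SDS layer now exposes directly to the administrator rather than hiding behind a RAID controller.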
What are the existing pain points in your architecture, and will NVMe solve them?
Some issues aren't data-transfer problems at all, but rather a mismatched read/write profile, or simply the result of not using enterprise-grade drives. Many drives today advertise high performance on their data sheets but don't address long-term consistency or predictability. That's usually because they tout peak-performance capabilities rather than steady-state performance profiles.
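The peak-versus-steady-state distinction is easy to see with a synthetic throughput trace: a drive may burst high while its write cache absorbs data, then settle far lower once the cache is exhausted. The numbers below are invented for illustration:

```python
# Peak vs. steady-state throughput from a synthetic write trace.
# Invented numbers: a drive bursts while its cache absorbs writes,
# then settles to a much lower sustained rate once the cache fills.

burst = [3000] * 10        # MB/s for the first 10 s, cache absorbing writes
sustained = [900] * 90     # MB/s for the next 90 s, cache exhausted
trace = burst + sustained

peak = max(trace)
steady_state = sum(sustained) / len(sustained)
overall = sum(trace) / len(trace)

print(f"peak={peak} MB/s, steady-state={steady_state:.0f} MB/s, "
      f"trace average={overall:.0f} MB/s")
```

A data sheet quoting only the 3,000 MB/s peak says little about the 900 MB/s a sustained workload would actually see, which is exactly the consistency gap enterprise-grade drives are specified to close.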