
Driving NVMe Into the Future

At the start of a new decade, there’s been a lot of buzz about the future of NVMe in the enterprise and Cloud infrastructure. From talk about NVMe over Fabrics to new form factors and even PCIe 5.0, a vast amount of innovation is set to be unveiled this year.

Despite the atmosphere of new inventions, new product releases and novel approaches to implementation, the adoption of NVMe is only just taking off. While roll-out might be slower than analysts and journalists expect, here are seven predictions of what will drive NVMe adoption this year.

1. Efficiency Will Still Be King

While most data center managers are focused on compute, the efficiency gains NVMe delivers will push many to begin the switch. By freeing up compute resources and offering faster transfer speeds, NVMe lets data centers do more with less, which makes the switch more attractive.

2. Adoption Will Increase with More Off-The-Rack Solutions

Not every data center is created equal. Hyperscalers have enormous budgets to build custom components and don't have to rely on off-the-shelf parts to get the best or fastest products. As more manufacturers begin rolling out affordable NVMe storage solutions, we'll start to see more tier one and tier two providers migrating. Off-the-shelf availability also brings a degree of market validation: these products have already been tested and proven in the field.

3. RAID Methods Are Going to Change


In theory, implementing NVMe unlocks the storage device from the hardware controller and delivers performance far above what you could get with SATA and SAS. But implementation opens a whole new can of worms for architectures that have relied on hardware-based RAID controllers and redundancy practices. Hardware RAID controller manufacturers will need to adapt to the emergence of NVMe and offer solutions that connect to existing U.2 server backplanes to support hardware-based NVMe RAID. A few RAID cards on the market already support NVMe, but the segment is still young. With hardware NVMe RAID in its early stages, organizations making the switch will have to revisit their architectural design decisions and work out how they will still meet their high-availability requirements, whether through software-based HCI solutions such as vSAN or Ceph, Linux software RAID or LVM mirroring, or application-level replication such as SQL Server Always On or Oracle ASM mirroring. One can argue that these software-based design decisions should exist even with hardware RAID controllers, since the latter only protect against a single point of failure.
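For teams leaning on Linux software RAID instead of a hardware controller, the building blocks are already familiar. Below is a minimal sketch, assuming a Linux host with mdadm installed and two spare NVMe namespaces; the device names and array path are placeholders, not a prescription:

```python
import subprocess

# Hypothetical device names; substitute the NVMe namespaces present on your host.
DEVICES = ["/dev/nvme0n1", "/dev/nvme1n1"]
ARRAY = "/dev/md0"

def run(cmd):
    """Echo a command, run it, and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a RAID-1 (mirror) array across the two NVMe namespaces.
run(["mdadm", "--create", ARRAY, "--level=1",
     "--raid-devices=" + str(len(DEVICES))] + DEVICES)

# Confirm the array state and which member devices it contains.
run(["mdadm", "--detail", ARRAY])
```

The same mirror could equally be built with LVM or consumed by an HCI layer; the point is that redundancy moves into software the administrator controls rather than a dedicated controller card.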

4. NVMe over Fabrics Will Be A Big Deal


NVMe-oF enables centralized, shared access to NVMe devices over a specialized network (FC/RDMA/TCP), so a network-attached drive can be accessed as if it were local to the client server. The benefits of centralized NVMe storage include simplified management, better capacity utilization and easier elimination of single points of failure. The NVMe-oF specification calls for either Fibre Channel, RDMA or TCP fabrics. Fibre Channel Protocol (FCP) has been the leading enterprise storage transport since the mid-1990s, carrying SCSI packets over Fibre Channel networks, so it was pivotal for NVM Express to define the new "FC-NVMe" protocol and make it possible to transport both SCSI and NVMe traffic over Fibre Channel, giving existing FCP users an upgrade path to FC-NVMe. RDMA (Remote Direct Memory Access) is another mainstream protocol that has existed for years on InfiniBand, RoCE (RDMA over Converged Ethernet) and iWARP fabrics, so building on RDMA let NVM Express leverage these existing transport technologies. TCP/IP is the most widespread network transport of all, with solid design principles dating back to the late 1970s, so it was natural for NVM Express to develop a way to carry NVMe commands over existing TCP networks, lowering deployment costs and shortening setup time.
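To make the TCP transport concrete, here is a minimal sketch of discovering and connecting to an NVMe-oF target from a Linux initiator using the nvme-cli tool. The target address, port and subsystem NQN are illustrative placeholders, not values from any particular product:

```python
import subprocess

# Placeholder values for illustration; use your target's address and subsystem NQN.
TARGET_ADDR = "192.168.1.50"                         # NVMe/TCP target IP (hypothetical)
TARGET_PORT = "4420"                                 # conventional NVMe/TCP port
SUBSYS_NQN = "nqn.2020-01.com.example:nvme-target"   # hypothetical subsystem NQN

def nvme(*args):
    """Invoke nvme-cli and print whatever it reports."""
    result = subprocess.run(["nvme", *args], check=True,
                            capture_output=True, text=True)
    print(result.stdout)

# Ask the target's discovery controller which subsystems it exposes.
nvme("discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT)

# Connect to one subsystem; its namespaces then appear as local /dev/nvmeXnY devices.
nvme("connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT, "-n", SUBSYS_QN if False else SUBSYS_NQN)

# Verify that the fabric-attached namespace is now visible alongside local drives.
nvme("list")
```

Once connected, the remote namespace behaves like a local block device to the host, which is exactly the "as if it were local" promise described above.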

The emergence of NVMe-oF also brings new challenges to IT infrastructure, since the bottleneck that existed with SCSI devices moves up the stack to network controllers and interfaces. Many companies have innovated in this space, with switches and NICs that support higher network speeds and highly tunable QoS. All-flash array manufacturers have also innovated, offering end-to-end NVMe-oF implementations with an array of tools to tune QoS and eliminate noisy neighbors.

5. Software-Defined Customers Make the Push

More Cloud customers are looking far beyond just read/write capabilities. They are now focused on maximizing the compute-intensive services they provide to their end-users. From Cloud-based transcoding to demanding gaming applications, NVMe enables another performance tier. It also allows providers to justify the upfront cost of infrastructure investment by offering pricing models based on performance benefits. Existing software-defined solutions like VMware vSAN and Ceph have seen tremendous development to support NVMe devices and the transport of NVMe traffic between cluster nodes, letting users maximize their compute infrastructure with a highly scalable, lower cost storage implementation.

6. Form Factor is a Factor

One of the knocks against NVMe to this point has been hot-swap support. While some manufacturers are introducing new form factors (including U.2, M.2, EDSFF, and E1.S), these will require data centers to do larger hardware overhauls: without existing ports, they generally require new backplanes and motherboard support. Even with Gen4 coming, PCIe Gen3 remains the preferred connection, as only one CPU architecture currently supports Gen4. The U.2 form factor is the most recognized because it keeps the familiar 2.5″ shape, and server manufacturers have been shipping front-loading U.2 chassis for quite some time. Form factor aside, U.2 is also hot swappable, so it is becoming prevalent in data centers.
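On Linux, a U.2 drive that arrives via hot swap surfaces the same way as one present at boot: as a controller under sysfs. A short sketch, assuming a Linux host with the in-kernel NVMe driver (the attribute file names shown are the standard ones it exposes), that inventories the NVMe controllers currently visible:

```python
from pathlib import Path

SYSFS_NVME = Path("/sys/class/nvme")

def list_nvme_controllers():
    """Print model, serial and firmware for every NVMe controller the kernel sees."""
    if not SYSFS_NVME.exists():
        print("No NVMe controllers found (or not a Linux host).")
        return
    for ctrl in sorted(SYSFS_NVME.iterdir()):
        # Each attribute is a small text file maintained by the NVMe driver.
        model = (ctrl / "model").read_text().strip()
        serial = (ctrl / "serial").read_text().strip()
        firmware = (ctrl / "firmware_rev").read_text().strip()
        print(f"{ctrl.name}: {model} (SN {serial}, FW {firmware})")

if __name__ == "__main__":
    list_nvme_controllers()
```

Running this before and after swapping a U.2 drive is a quick way to confirm the new device has been enumerated without rebooting the host.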

7. Endurance and Predictable Performance Win Over Speeds and Feeds

While the vast majority of manufacturers today highlight peak IOPS and lowest-latency figures on their data sheets, more data center managers are relying on consistency rather than peak performance. Peak figures describe a best-case scenario; under full utilization and sustained load, drives rarely hit those numbers consistently. So more data center managers want performance that is reliably and consistently high, giving them predictability and dependability from their investment.
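One practical way to look past headline numbers is to judge a drive by its latency percentiles rather than its best case. The sketch below uses made-up, purely illustrative sample data to show how p99 and p99.9 figures expose the tail behavior that peak IOPS numbers hide:

```python
import random

def percentile(samples, pct):
    """Return the value below which roughly pct percent of the samples fall."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, round(pct / 100.0 * (len(ordered) - 1)))
    return ordered[index]

# Synthetic latency samples (microseconds): mostly fast, with occasional slow outliers,
# loosely imitating a drive under sustained mixed load. Numbers are invented.
random.seed(42)
latencies = [random.gauss(90, 10) for _ in range(9900)] + \
            [random.gauss(900, 200) for _ in range(100)]

print(f"average : {sum(latencies) / len(latencies):7.1f} us")
print(f"p50     : {percentile(latencies, 50):7.1f} us")
print(f"p99     : {percentile(latencies, 99):7.1f} us")
print(f"p99.9   : {percentile(latencies, 99.9):7.1f} us")
```

A drive with a slightly lower average but a much tighter p99.9 will feel more dependable in production than one that only shines in short benchmark bursts.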

This year should be a banner year for NVMe products, especially on the client side. NVMe is the future of storage, but it will still take a while for all the pieces to fall into place before most data center managers can completely migrate away from SATA and SAS solutions.

#KingstonIsWithYou
