
Driving NVMe into the future

At the start of a new decade, there’s been a lot of buzz about the future of NVMe in the enterprise and cloud infrastructure. From talk about NVMe over Fabrics to new form factors and even PCIe 5.0, a vast amount of innovation is set to be unveiled this year.

NVMe adoption is only just taking off, despite all the buzz around new inventions, new product releases and novel approaches to implementation. While the roll-out may be slower than analysts and journalists expect, here are seven predictions of what will drive NVMe adoption this year.

1. Efficiency will still be king

While most data center managers are focused on compute, the fact that NVMe can make things run more efficiently will push many to begin the switch. With more available compute resources and faster transfer speeds, data centers can do more with less, which makes the switch all the more attractive.

2. Adoption will increase with more off-the-rack solutions

Not every data center is created equal. Hyperscalers have enormous budgets for custom components and don’t have to rely on off-the-shelf parts to field the best or fastest products. As more manufacturers roll out affordable NVMe storage solutions, we’ll start to see more tier one and tier two providers migrating. Commercial availability also brings a level of market validation and testing that has already proven its worth.

3. RAID methods are going to change


In theory, implementing NVMe unlocks the storage device from the hardware controller and delivers performance far above what you could get with SATA and SAS. However, implementation opens a whole new can of worms for architectures that have relied on hardware-based RAID controllers and redundancy practices. Hardware RAID controller manufacturers will need to adapt to the emergence of NVMe and offer solutions that connect to existing U.2 server backplanes to support HW-based NVMe RAID. A few RAID cards on the market already support NVMe, but the market is still new.

With HW-based RAID in the elementary stages of development, organizations making the switch to NVMe will have to weigh architectural design decisions and explore how they will still meet their high-availability requirements, whether through SW-based HCI solutions such as vSAN or Ceph, Linux SW RAID or LVM mirroring, or application-based high-availability replication such as SQL Server Always On or Oracle ASM mirroring. One can argue that these SW-based design decisions should exist even with HW-based RAID controllers, since the latter only protect against a single point of failure.
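On Linux, for example, SW-based mirroring across NVMe drives can be built with mdadm; a minimal sketch, assuming two spare NVMe namespaces (the device names below are illustrative):

```shell
# Create a two-way software mirror (RAID 1) across two NVMe namespaces
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Watch the initial resync progress
cat /proc/mdstat

# Persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

Like a HW RAID controller, a mirror of this kind only protects against a single drive failure, which is why the broader high-availability design decisions above still apply.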

4. NVMe over Fabrics will be a big deal


NVMe-oF enables centralised, shared access to NVMe devices over a specialised network (FC, RDMA or TCP), so that a network-attached drive behaves as if it were local to the client server. The benefits of centralised NVMe storage management include simplified management, better capacity utilisation and easier elimination of single points of failure.

The NVMe-oF spec calls for Fibre Channel, RDMA or TCP fabrics. Fibre Channel Protocol (FCP) has been the leading enterprise storage transport technology since the mid-1990s. Since it is used for transporting SCSI packets over Fibre Channel networks, it was pivotal for NVM Express to define the new “FC-NVMe” protocol, making it possible to transport both SCSI and NVMe traffic over Fibre Channel and enabling existing FCP users to upgrade to FC-NVMe. RDMA (Remote Direct Memory Access) is another mainstream protocol that has existed for years on InfiniBand, RoCE (RDMA over Converged Ethernet) and iWARP fabrics, so building on RDMA let NVM Express leverage these existing transport technologies. TCP/IP is the most mainstream network transport protocol of all, with design principles that have held solid since the late 1970s. It was natural for NVM Express to develop a methodology for transporting NVMe commands over existing TCP networks, to lower deployment costs and accelerate setup time.
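As a rough sketch of the TCP transport in practice, the nvme-cli tool on Linux can discover and attach a remote subsystem; the target address and NQN below are illustrative, and this assumes a target is already exporting a namespace:

```shell
# Discover subsystems exported by an NVMe/TCP target (address is illustrative)
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect to a discovered subsystem by its NQN (NQN is illustrative)
nvme connect -t tcp -n nqn.2014-08.org.nvmexpress:example-subsys -a 192.0.2.10 -s 4420

# The remote namespace now appears alongside local NVMe block devices
nvme list
```

Once connected, the remote namespace is addressed exactly like a local drive, which is the core promise of NVMe-oF.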

The emergence of NVMe-oF also brings new challenges to IT infrastructures, since the bottleneck that existed with SCSI devices moves up the stack to network controllers and interfaces. Many companies have innovated in this space, however, with switches and NICs that support higher network speeds and highly tunable QoS. All-flash array manufacturers have also innovated, offering end-to-end NVMe-oF implementations with an array of tools to tune for better QoS and eliminate noisy neighbours.

5. Software-defined customers make the push

More cloud customers are looking far beyond basic read/write capabilities. They are now focused on maximising the compute-intensive services they provide to their end-users. From cloud-based transcoding to intensive gaming applications, NVMe enables another performance tier. It also allows providers to justify the up-front cost of infrastructure investment by offering pricing models based on performance benefits. Existing software-defined solutions such as VMware vSAN and Ceph have seen tremendous development to support NVMe devices and to transport NVMe traffic between cluster nodes, letting users maximise their compute infrastructure with a highly scalable, lower-cost storage implementation.
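As an illustration of how directly these stacks consume NVMe devices, provisioning an NVMe drive as a Ceph OSD is a single ceph-volume call; a sketch, assuming an already-configured Ceph cluster (the device name is illustrative):

```shell
# Prepare and activate an NVMe drive as a Ceph OSD in one step
ceph-volume lvm create --data /dev/nvme0n1

# Confirm the new OSD has joined the cluster map
ceph osd tree
```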

Kingston DC1000M & DC1000B Server SSDs

6. Form factor is a factor

One of the knocks against NVMe to this point has been hot-swap support. While manufacturers are introducing new form factors (including U.2, M.2, EDSFF and E1.S), most of these require data centers to perform larger hardware overhauls: without existing ports, they generally demand new backplanes and motherboard support. And with Gen4 coming, PCIe Gen3 remains the preferred connection, as only one CPU architecture currently supports Gen4. The U.2 form factor is the most recognised, since it retains the familiar 2.5″ form factor and server manufacturers have been shipping front-loading U.2 chassis for quite some time. Form factor aside, U.2 is also hot swappable and is therefore becoming prevalent in data centers.

7. Endurance and predictable performance win over speeds and feeds

While the vast majority of manufacturers highlight peak IOPS and lowest-latency figures on their data sheets, more data center managers are relying on consistency rather than peak performance. Peak figures suggest a best-case performance value, but under full utilisation and sustained load, drives rarely hit those figures consistently. More data center managers therefore want performance that is reliably and consistently high, giving them predictability and dependability in their investment.
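Steady-state behaviour can be checked with a long, sustained fio run rather than a short burst; a minimal sketch (the device path and runtime are illustrative, and writing to a raw device destroys any data on it):

```shell
# Sustained 4K random-write workload to observe steady-state IOPS, not peak
# WARNING: writes directly to the device and destroys its contents
fio --name=steady-state --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --rw=randwrite \
    --bs=4k --iodepth=32 --numjobs=4 \
    --time_based --runtime=1800 --group_reporting
```

Comparing the reported average and 99th-percentile latency against the data-sheet peak figures is a simple way to judge how predictable a drive really is under load.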

This should be a boom year for NVMe products, especially on the client side. NVMe is the future of storage. However, it’s still going to take a while for all the pieces to fall into place before most data center managers can completely migrate away from SATA and SAS solutions.
